Nov 28 20:49:19 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 20:49:19 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:19 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 
20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 20:49:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 20:49:20 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 20:49:20 crc kubenswrapper[4957]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.637576 4957 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.643997 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644042 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644054 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644064 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644073 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644081 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644090 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644098 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644106 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644114 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644123 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644132 4957 feature_gate.go:330] unrecognized feature gate: Example Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644139 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644147 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644156 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644165 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644173 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644181 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644189 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644198 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644232 4957 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644243 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644253 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644263 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644278 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644288 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644297 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644304 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644312 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644320 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644328 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644336 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644344 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644366 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644374 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644383 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644391 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644399 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644407 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644415 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644423 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644431 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644441 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
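The long run of feature_gate.go warnings above (which repeats several more times during startup) comes from gate names the kubelet's upstream registry does not know; these look like OpenShift-level gates (RouteAdvertisements, AdminNetworkPolicy, GatewayAPI and so on) being passed through to a Kubernetes v1.31 kubelet, which logs each unknown name at feature_gate.go:330 and carries on. The :351 and :353 variants mark gates it does know: KMSv1 is deprecated but explicitly enabled, while CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders and ValidatingAdmissionPolicy are already GA, so setting them is a no-op slated for removal. A sketch that tallies the three kinds, keyed on those source line numbers as they appear in this log (a brittle but honest proxy), again assuming the hypothetical kubelet-journal.log:

    # gate_warnings.py: classify feature-gate messages by the feature_gate.go
    # line that emits them in this dump: 330 unrecognized, 351 deprecated, 353 GA.
    import re
    from collections import Counter

    kinds = {"330": "unrecognized", "351": "deprecated", "353": "already-GA"}
    pat = re.compile(r"feature_gate\.go:(330|351|353)\]")
    with open("kubelet-journal.log") as f:   # hypothetical filename
        text = f.read()
    counts = Counter(kinds[m.group(1)] for m in pat.finditer(text))
    for kind, n in counts.most_common():
        print(f"{n:4d}  {kind}")
    # \s+ tolerates names wrapped onto the next line, as happens in this dump.
    names = set(re.findall(r"unrecognized feature gate:\s+(\w+)", text))
    print(len(names), "distinct unrecognized gates")

In this excerpt the unrecognized count dwarfs the other two; that reads as expected pass-through noise rather than a fault.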
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644452 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644462 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644472 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644482 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644493 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644501 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644509 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644517 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644527 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644538 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644547 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644555 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644563 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644571 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644578 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644586 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644593 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644601 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644609 4957 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644620 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644654 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644665 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644673 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644685 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644693 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644701 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644709 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.644717 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645238 4957 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645273 4957 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645293 4957 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645308 4957 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645322 4957 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645331 4957 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645370 4957 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645386 4957 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645397 4957 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645409 4957 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645422 4957 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645437 4957 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645450 4957 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645460 4957 flags.go:64] FLAG: --cgroup-root="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645469 4957 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645478 4957 flags.go:64] FLAG: --client-ca-file="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645488 4957 flags.go:64] FLAG: --cloud-config="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645497 4957 flags.go:64] FLAG: --cloud-provider="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645505 4957 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645517 4957 flags.go:64] FLAG: --cluster-domain="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645526 4957 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645535 4957 flags.go:64] FLAG: --config-dir="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645544 4957 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645554 4957 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645565 4957 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645574 4957 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645584 4957 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645594 4957 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645603 4957 flags.go:64] FLAG: --contention-profiling="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645612 4957 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645621 4957 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645630 4957 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645640 4957 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645651 4957 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645660 4957 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645671 4957 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645680 4957 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645689 4957 flags.go:64] FLAG: --enable-server="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645699 4957 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645709 4957 flags.go:64] FLAG: --event-burst="100" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645719 4957 flags.go:64] FLAG: --event-qps="50" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645728 4957 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645737 4957 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645746 4957 flags.go:64] FLAG: --eviction-hard="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645757 4957 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645765 4957 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645774 4957 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645786 4957 flags.go:64] FLAG: --eviction-soft="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645796 4957 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645805 4957 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 
20:49:20.645814 4957 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645823 4957 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645831 4957 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645840 4957 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645849 4957 flags.go:64] FLAG: --feature-gates="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645860 4957 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645869 4957 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645878 4957 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645887 4957 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645897 4957 flags.go:64] FLAG: --healthz-port="10248" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645906 4957 flags.go:64] FLAG: --help="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645914 4957 flags.go:64] FLAG: --hostname-override="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645923 4957 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645932 4957 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645941 4957 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645950 4957 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645959 4957 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645968 4957 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645977 4957 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645986 4957 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.645997 4957 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646006 4957 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646016 4957 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646025 4957 flags.go:64] FLAG: --kube-reserved="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646034 4957 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646043 4957 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646053 4957 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646061 4957 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646070 4957 flags.go:64] FLAG: --lock-file="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646079 4957 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646088 4957 flags.go:64] 
FLAG: --log-flush-frequency="5s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646097 4957 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646130 4957 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646140 4957 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646150 4957 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646159 4957 flags.go:64] FLAG: --logging-format="text" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646167 4957 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646177 4957 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646186 4957 flags.go:64] FLAG: --manifest-url="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646195 4957 flags.go:64] FLAG: --manifest-url-header="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646234 4957 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646244 4957 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646255 4957 flags.go:64] FLAG: --max-pods="110" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646264 4957 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646274 4957 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646282 4957 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646291 4957 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646300 4957 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646310 4957 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646320 4957 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646342 4957 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646352 4957 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646360 4957 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646369 4957 flags.go:64] FLAG: --pod-cidr="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646378 4957 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646391 4957 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646400 4957 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646409 4957 flags.go:64] FLAG: --pods-per-core="0" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646418 4957 flags.go:64] FLAG: --port="10250" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 
20:49:20.646428 4957 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646437 4957 flags.go:64] FLAG: --provider-id="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646446 4957 flags.go:64] FLAG: --qos-reserved="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646455 4957 flags.go:64] FLAG: --read-only-port="10255" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646464 4957 flags.go:64] FLAG: --register-node="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646473 4957 flags.go:64] FLAG: --register-schedulable="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646482 4957 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646497 4957 flags.go:64] FLAG: --registry-burst="10" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646506 4957 flags.go:64] FLAG: --registry-qps="5" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646515 4957 flags.go:64] FLAG: --reserved-cpus="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646525 4957 flags.go:64] FLAG: --reserved-memory="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646536 4957 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646546 4957 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646555 4957 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646564 4957 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646572 4957 flags.go:64] FLAG: --runonce="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646581 4957 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646591 4957 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646599 4957 flags.go:64] FLAG: --seccomp-default="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646608 4957 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646617 4957 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646627 4957 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646637 4957 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646647 4957 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646656 4957 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646665 4957 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646673 4957 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646682 4957 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646692 4957 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646701 4957 flags.go:64] FLAG: --system-cgroups="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646709 4957 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646723 4957 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646732 4957 flags.go:64] FLAG: --tls-cert-file="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646741 4957 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646752 4957 flags.go:64] FLAG: --tls-min-version="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646762 4957 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646770 4957 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646780 4957 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646789 4957 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646798 4957 flags.go:64] FLAG: --v="2" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646809 4957 flags.go:64] FLAG: --version="false" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646821 4957 flags.go:64] FLAG: --vmodule="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646832 4957 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.646842 4957 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647042 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647053 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647063 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647072 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647081 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647089 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647098 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647106 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647114 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647123 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647132 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647141 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647149 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647157 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647165 4957 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647173 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647181 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647189 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647196 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647204 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647237 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647245 4957 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647253 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647261 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647269 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647277 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647284 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647292 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647300 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647307 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647315 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647323 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647331 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647338 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647346 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647354 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647362 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647370 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647379 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647387 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 
20:49:20.647395 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647402 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647413 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647424 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647434 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647443 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647467 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647478 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647489 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647498 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647509 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647517 4957 feature_gate.go:330] unrecognized feature gate: Example Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647524 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647532 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647540 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647548 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647556 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647564 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647571 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647579 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647597 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647607 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647614 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647622 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647629 4957 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647637 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647646 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647653 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647661 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647671 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.647681 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.647918 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.658932 4957 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.659003 4957 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659134 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659157 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659167 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659175 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659185 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659193 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659201 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659234 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659243 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659253 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659262 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659270 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659278 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659286 4957 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659294 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659302 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659313 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659323 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659331 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659340 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659349 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659357 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659365 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659374 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659383 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659392 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659400 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659408 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659416 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659425 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659432 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659441 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659449 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659457 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659466 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659474 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659484 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
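Between the gate warnings, the flags.go:64 lines earlier in this pass dump every kubelet flag with its effective value: --config="/etc/kubernetes/kubelet.conf", --kubeconfig="/var/lib/kubelet/kubeconfig", --node-ip="192.168.126.11", --register-with-taints="node-role.kubernetes.io/master=:NoSchedule", and so on. Because the journal wraps entries mid-token, it is easiest to match against the whole text rather than line by line; a minimal sketch, still assuming the hypothetical kubelet-journal.log:

    # flag_dump.py: collect the 'flags.go:64] FLAG: --name="value"' dump into a dict.
    import re

    # \s+ after FLAG: absorbs the newline when the journal wraps mid-entry.
    pat = re.compile(r'FLAG:\s+(--[\w-]+)="([^"]*)"')
    with open("kubelet-journal.log") as f:   # hypothetical saved excerpt
        text = f.read()
    flags = dict(pat.findall(text))   # later duplicates would overwrite earlier ones
    print(flags["--config"])                # /etc/kubernetes/kubelet.conf
    print(flags["--node-ip"])               # 192.168.126.11
    print(flags["--register-with-taints"])  # node-role.kubernetes.io/master=:NoSchedule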
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659496 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659505 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659514 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659522 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659530 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659538 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659546 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659554 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659562 4957 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659570 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659578 4957 feature_gate.go:330] unrecognized feature gate: Example Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659586 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659594 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659602 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659612 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659621 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659629 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659640 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659650 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659659 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659668 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659676 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659684 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659693 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659701 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659709 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659716 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659724 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659731 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659739 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659747 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659755 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659764 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659773 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.659785 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.659999 4957 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660011 4957 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660022 4957 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660033 4957 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
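Each pass over the gates ends with a feature_gate.go:386 summary of the effective map, and only upstream-recognized gates survive into it; the same map is printed three times in this excerpt, once per parsing pass. A sketch that turns that Go map literal into a Python dict (the sample string below is abridged from the log line above, not the full set):

    # effective_gates.py: parse the kubelet's "feature gates: {map[...]}" summary.
    import re

    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")
    body = re.search(r"map\[(.*?)\]", line).group(1)
    # Go prints the map as space-separated name:bool pairs.
    gates = {k: v == "true" for k, v in
             (pair.split(":") for pair in body.split())}
    print(gates["KMSv1"])                  # True
    print(gates["VolumeAttributesClass"])  # False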
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660043 4957 feature_gate.go:330] unrecognized feature gate: Example Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660054 4957 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660065 4957 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660074 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660082 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660091 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660100 4957 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660109 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660117 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660125 4957 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660132 4957 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660140 4957 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660148 4957 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660156 4957 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660164 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660172 4957 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660180 4957 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660188 4957 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660196 4957 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660205 4957 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660234 4957 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660242 4957 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660250 4957 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660257 4957 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660266 4957 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 
20:49:20.660274 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660281 4957 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660289 4957 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660298 4957 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660307 4957 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660315 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660323 4957 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660331 4957 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660340 4957 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660347 4957 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660355 4957 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660363 4957 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660371 4957 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660380 4957 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660389 4957 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660398 4957 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660406 4957 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660414 4957 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660423 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660430 4957 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660438 4957 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660446 4957 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660453 4957 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660461 4957 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660471 4957 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660480 4957 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660489 4957 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660497 4957 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660505 4957 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660514 4957 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660523 4957 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660530 4957 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660538 4957 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660546 4957 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660554 4957 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660561 4957 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660569 4957 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660577 4957 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660585 4957 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660592 4957 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660600 4957 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.660609 4957 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.660621 4957 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.660945 4957 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.665276 4957 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.665409 4957 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
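With flag and gate parsing done, the kubelet reports v1.31.5, confirms the bootstrap kubeconfig is still valid, and loads its client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem. The certificate_manager entries just below show why it immediately tries to rotate: the computed rotation deadline (2025-11-10 16:21:09 UTC) is already in the past at log time (2025-11-28 20:49), so rotation fires at once, and the CSR POST to https://api-int.crc.testing:6443 fails with connection refused, presumably because the API server is not yet listening this early in boot; that reads as a transient at this stage rather than a certificate problem. A sketch that reproduces the decision from the logged timestamps (values copied from the entries below):

    # rotation_check.py: redo the rotation decision from the certificate_manager
    # timestamps in this log. The manager rotates once the deadline has passed.
    from datetime import datetime, timezone

    fmt = "%Y-%m-%d %H:%M:%S %z"
    expiration = datetime.strptime("2026-02-24 05:52:08 +0000", fmt)
    deadline   = datetime.strptime("2025-11-10 16:21:09 +0000", fmt)
    now        = datetime(2025, 11, 28, 20, 49, 20, tzinfo=timezone.utc)  # log time

    print("rotate now:", now >= deadline)                 # True
    print("days of validity left:", (expiration - now).days)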
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.666305 4957 server.go:997] "Starting client certificate rotation" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.666355 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.666519 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 16:21:09.634976501 +0000 UTC Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.666601 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.674005 4957 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.675182 4957 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.677330 4957 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.687121 4957 log.go:25] "Validated CRI v1 runtime API" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.710648 4957 log.go:25] "Validated CRI v1 image API" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.712519 4957 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.715010 4957 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-20-45-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.715038 4957 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.727929 4957 manager.go:217] Machine: {Timestamp:2025-11-28 20:49:20.726678946 +0000 UTC m=+0.195326865 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:954acfd8-81a0-40d5-975d-9c927901b7d2 BootID:57605594-99dc-4010-aedb-e801960f2510 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 
Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1e:c7:84 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1e:c7:84 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9f:c2:fc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b5:46:f4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:35:3c:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9d:17:3e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:94:20:36:e0:e8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:b1:cc:69:80:64 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.728157 4957 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.728334 4957 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.728976 4957 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729143 4957 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729177 4957 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729386 4957 topology_manager.go:138] "Creating topology manager with none policy" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729400 4957 
container_manager_linux.go:303] "Creating device plugin manager" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729651 4957 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729697 4957 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729906 4957 state_mem.go:36] "Initialized new in-memory state store" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.729995 4957 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.730645 4957 kubelet.go:418] "Attempting to sync node with API server" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.730665 4957 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.730685 4957 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.730698 4957 kubelet.go:324] "Adding apiserver pod source" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.730710 4957 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.733142 4957 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.733591 4957 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.733644 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.733652 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.733739 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.733742 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.734533 4957 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735105 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 28 20:49:20 crc 
kubenswrapper[4957]: I1128 20:49:20.735133 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735142 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735151 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735167 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735176 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735185 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735199 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735231 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735243 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735259 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735271 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.735566 4957 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.736959 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.739019 4957 server.go:1280] "Started kubelet" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.739692 4957 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.739635 4957 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.741807 4957 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.742752 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.742779 4957 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.742951 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:48:13.139921158 +0000 UTC Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.742980 4957 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 108h58m52.396942544s for next certificate rotation Nov 28 20:49:20 crc systemd[1]: Started Kubernetes Kubelet. 
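The Node Config entry above also fixes the node's resource accounting: KubeReserved is null, SystemReserved holds back 200m CPU plus 350Mi each of memory and ephemeral storage, and the hard eviction thresholds include memory.available < 100Mi. Under the standard node-allocatable formula (capacity minus kube-reserved, minus system-reserved, minus the hard eviction threshold), the reported MemoryCapacity of 33654128640 bytes should leave roughly 30.9 GiB allocatable to pods. A small sketch of that arithmetic, using only values taken from the log entries above; the kubelet performs this internally, this is just the worked example:

```go
// Node-allocatable arithmetic with the values from the Machine and Node Config
// entries above. Illustrative; not kubelet code.
package main

import "fmt"

func main() {
	const (
		Mi             = 1024 * 1024
		capacity       = 33654128640 // MemoryCapacity from the Machine entry
		systemReserved = 350 * Mi    // SystemReserved["memory"]
		kubeReserved   = 0           // KubeReserved is null in this config
		hardEviction   = 100 * Mi    // memory.available hard eviction threshold
	)
	allocatable := capacity - systemReserved - kubeReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*1024*1024))
}
```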
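Every API call in this window (the certificate signing request, the Service and Node list-watches, and shortly the node lease and event writes) fails with "dial tcp 38.102.83.111:6443: connect: connection refused". That is expected at this point in a CRC boot: api-int.crc.testing:6443 is served by a kube-apiserver static pod that this same kubelet has not started yet, so the kubelet's clients simply retry with backoff until the socket opens. A self-contained sketch of that wait-and-retry pattern, seeded with the 200ms retry interval the lease controller logs just below; the endpoint is taken from the log and the backoff shape is an assumption for illustration:

```go
// Diagnostic sketch of the retry loop behind the "connection refused" errors:
// keep re-dialing the API endpoint with exponential backoff until it listens.
// Not kubelet code; timings and attempt count are illustrative.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443" // from the log lines above
	backoff := 200 * time.Millisecond           // initial retry interval seen in the lease log
	for attempt := 1; attempt <= 8; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server is accepting connections")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // assumed exponential backoff, client-go style
	}
	fmt.Println("gave up waiting for", endpoint)
}
```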
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.743459 4957 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.743470 4957 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.743913 4957 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.745326 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="200ms" Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.745567 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.745709 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.744279 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c46c3c9b43968 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 20:49:20.73899044 +0000 UTC m=+0.207638349,LastTimestamp:2025-11-28 20:49:20.73899044 +0000 UTC m=+0.207638349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.745774 4957 factory.go:55] Registering systemd factory Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.745920 4957 factory.go:221] Registration of the systemd container factory successfully Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746235 4957 factory.go:153] Registering CRI-O factory Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746248 4957 factory.go:221] Registration of the crio container factory successfully Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746294 4957 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746317 4957 factory.go:103] Registering Raw factory Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746330 4957 manager.go:1196] Started watching for new ooms in manager Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746819 4957 manager.go:319] Starting recovery of all containers Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.746991 
4957 server.go:460] "Adding debug handlers to kubelet server" Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.743622 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.761143 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.761623 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.761760 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.761840 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.762017 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763317 4957 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763497 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763596 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763698 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763791 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 
20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.763883 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764051 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764185 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764435 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764525 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764612 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764688 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764762 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764855 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.764946 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766519 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766604 4957 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766635 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766661 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766686 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766707 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766732 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766800 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766824 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766848 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766869 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766891 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766911 4957 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766940 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766962 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.766985 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767006 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767027 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767051 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767093 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767125 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767152 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767172 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767193 4957 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767243 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767266 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767288 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767308 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767330 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767351 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767372 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767395 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767421 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767462 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767490 4957 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767513 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767576 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767603 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767633 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767656 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767609 4957 manager.go:324] Recovery completed Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767708 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767829 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767853 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767882 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767907 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" 
seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767941 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767975 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.767996 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768025 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768056 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768088 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768116 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768138 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768158 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768184 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768274 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 28 20:49:20 crc 
kubenswrapper[4957]: I1128 20:49:20.768297 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768320 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768341 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768364 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768385 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768408 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768429 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768449 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768494 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768518 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768548 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768574 4957 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768643 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768667 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.768690 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769127 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769152 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769176 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769251 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769274 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769298 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769321 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769344 4957 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769367 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769399 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769424 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769446 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769469 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769505 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769535 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769561 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769587 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769612 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769637 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769661 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769684 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769708 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769733 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769756 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769778 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769800 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769820 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769841 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769863 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769886 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769907 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769932 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769953 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.769974 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770020 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770045 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770072 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770102 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770129 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770158 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770183 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770236 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770258 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770279 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770305 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770325 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770348 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770373 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770394 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770416 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770440 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770463 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770484 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770506 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770527 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770547 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770573 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770595 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770617 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770640 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770661 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770685 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770707 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770734 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770755 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770777 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770801 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770821 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770842 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770863 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770884 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770908 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770953 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770974 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.770994 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771018 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771041 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771072 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771102 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771132 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771154 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771175 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771195 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771244 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771268 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771290 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771310 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771333 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771354 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771378 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771399 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771422 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771444 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771465 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771489 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771513 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771536 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771557 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771579 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771603 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771624 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771644 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771669 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771690 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771713 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771736 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771759 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771779 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771801 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771822 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771845 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771868 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771891 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771913 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771934 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771956 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.771980 4957 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.772000 4957 reconstruct.go:97] "Volume reconstruction finished"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.772015 4957 reconciler.go:26] "Reconciler: start to sync state"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.778474 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.782352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.782397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.782411 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.784170 4957 cpu_manager.go:225] "Starting CPU manager" policy="none"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.784190 4957 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.784222 4957 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.795173 4957 policy_none.go:49] "None policy: Start"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.796101 4957 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.796134 4957 state_mem.go:35] "Initializing new in-memory state store"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.807826 4957 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.811478 4957 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.811680 4957 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.811731 4957 kubelet.go:2335] "Starting kubelet main sync loop"
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.811788 4957 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 28 20:49:20 crc kubenswrapper[4957]: W1128 20:49:20.812317 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.812362 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError"
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.848330 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.849137 4957 manager.go:334] "Starting Device Plugin manager"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.849226 4957 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.849340 4957 server.go:79] "Starting device plugin registration server"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.849901 4957 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.849918 4957 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.850432 4957 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.850601 4957 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.850616 4957 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.860626 4957 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.912360 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.912558 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.913826 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.913866 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.913876 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.914041 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.914277 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.914335 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915081 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915292 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915407 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.915490 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916073 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916721 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.916875 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.917004 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.917047 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.917850 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.917872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.917880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918136 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918284 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918454 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.918519 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919351 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919370 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919754 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.919787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.920082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.920194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.920226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.945875 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="400ms"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.952229 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.955723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.955762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.955771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.955800 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: E1128 20:49:20.956304 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.973984 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974037 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974070 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974103 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974135 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974164 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974229 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974259 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974279 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974295 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974340 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974374 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974391 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974413 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:20 crc kubenswrapper[4957]: I1128 20:49:20.974436 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075446 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075561 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075611 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075664 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075701 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075681 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075750 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075765 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075780 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075790 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075824 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075829 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075911 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.075957 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076009 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076028 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076043 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076059 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076074 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076092 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076111 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076138 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076138 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.076243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.157107 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.158462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.158518 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.158530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.158558 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: E1128 20:49:21.159077 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.236680 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.243196 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.263565 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.269031 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.272416 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:49:21 crc kubenswrapper[4957]: W1128 20:49:21.281370 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-685abac90bae1c2894b6b014080a4ea525c40606974bcf1a76e12bd8d1d08d44 WatchSource:0}: Error finding container 685abac90bae1c2894b6b014080a4ea525c40606974bcf1a76e12bd8d1d08d44: Status 404 returned error can't find the container with id 685abac90bae1c2894b6b014080a4ea525c40606974bcf1a76e12bd8d1d08d44
Nov 28 20:49:21 crc kubenswrapper[4957]: W1128 20:49:21.281878 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6fe92e7d0ef9e3c51409270fd56fa9ad259724f43645bb8c13aeb19a4c9d739c WatchSource:0}: Error finding container 6fe92e7d0ef9e3c51409270fd56fa9ad259724f43645bb8c13aeb19a4c9d739c: Status 404 returned error can't find the container with id 6fe92e7d0ef9e3c51409270fd56fa9ad259724f43645bb8c13aeb19a4c9d739c
Nov 28 20:49:21 crc kubenswrapper[4957]: W1128 20:49:21.293496 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-da859baf6cbd9de5b09ab1b1bcf9b579f1c2ee704aa1007b2f4fdbe87a1d35d3 WatchSource:0}: Error finding container da859baf6cbd9de5b09ab1b1bcf9b579f1c2ee704aa1007b2f4fdbe87a1d35d3: Status 404 returned error can't find the container with id da859baf6cbd9de5b09ab1b1bcf9b579f1c2ee704aa1007b2f4fdbe87a1d35d3
Nov 28 20:49:21 crc kubenswrapper[4957]: W1128 20:49:21.295660 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0c1c01b65a20b39d40f7a39cf3025e35d6c3600b1cc2d17f545a24e80be04a22 WatchSource:0}: Error finding container 0c1c01b65a20b39d40f7a39cf3025e35d6c3600b1cc2d17f545a24e80be04a22: Status 404 returned error can't find the container with id 0c1c01b65a20b39d40f7a39cf3025e35d6c3600b1cc2d17f545a24e80be04a22
Nov 28 20:49:21 crc kubenswrapper[4957]: E1128 20:49:21.346801 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="800ms"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.559265 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.560142 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.560182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.560194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.560233 4957 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Nov 28 20:49:21 crc kubenswrapper[4957]: E1128 20:49:21.560714 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc" Nov 28 20:49:21 crc kubenswrapper[4957]: W1128 20:49:21.590729 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:21 crc kubenswrapper[4957]: E1128 20:49:21.591045 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.738093 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.817889 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0" exitCode=0 Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.817942 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.818782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c1c01b65a20b39d40f7a39cf3025e35d6c3600b1cc2d17f545a24e80be04a22"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.818982 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.820156 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.820186 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da859baf6cbd9de5b09ab1b1bcf9b579f1c2ee704aa1007b2f4fdbe87a1d35d3"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.820343 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.820370 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.820384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.821819 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.822553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.822590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.822603 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.823025 4957 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="709e2c92dd4a41ed982aac9849c5ee21ccdeef23c49582c2ddcc5d6281359f73" exitCode=0 Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.823058 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"709e2c92dd4a41ed982aac9849c5ee21ccdeef23c49582c2ddcc5d6281359f73"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.823121 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0088d0334c151ffe65f8fb338a36cfd899aa508deefdfeee7684b64de9c017a2"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.823319 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.824661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.824689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.824700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.825092 4957 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1" exitCode=0 Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.825192 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.825256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"685abac90bae1c2894b6b014080a4ea525c40606974bcf1a76e12bd8d1d08d44"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.825357 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.826238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.826275 4957 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.826289 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.827240 4957 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef" exitCode=0 Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.827280 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.827342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6fe92e7d0ef9e3c51409270fd56fa9ad259724f43645bb8c13aeb19a4c9d739c"} Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.827445 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.828523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.828555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:21 crc kubenswrapper[4957]: I1128 20:49:21.828572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: E1128 20:49:22.148551 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="1.6s" Nov 28 20:49:22 crc kubenswrapper[4957]: W1128 20:49:22.163986 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Nov 28 20:49:22 crc kubenswrapper[4957]: E1128 20:49:22.164085 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.361045 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.362098 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.362135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.362145 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.362171 4957 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.789509 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.834897 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.834945 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.834959 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.835061 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.835961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.836023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.836035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.837526 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.837604 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.838906 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.838958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.838973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841371 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841397 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841407 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841415 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841424 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.841484 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.842246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.842282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.842290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.844820 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.844844 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.844857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.844939 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.845716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.845738 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.845750 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.847048 4957 generic.go:334] "Generic 
(PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a53d12573e5dbb6f09df206db818862e2a7923f7da15b75c45c952cc807a5b2e" exitCode=0 Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.847126 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a53d12573e5dbb6f09df206db818862e2a7923f7da15b75c45c952cc807a5b2e"} Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.847267 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.847976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.848003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:22 crc kubenswrapper[4957]: I1128 20:49:22.848048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.655476 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.855893 4957 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ec06eef052d16b7e862b018369b878b33744413e06b6b23ca9e234996462a3a" exitCode=0 Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.855976 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ec06eef052d16b7e862b018369b878b33744413e06b6b23ca9e234996462a3a"} Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.856073 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.856147 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.856271 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858106 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858610 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:23 crc kubenswrapper[4957]: I1128 20:49:23.858628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863719 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"31d4321368fe1a765bd9e11beb16cbe7ee7f34712f072d357f9bc57d34f21bb8"} Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863751 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863794 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b59cde8df3588c502aeb2171375eeb39308a3e5cf96ea37195d9e5c2ebfd1610"} Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b2d04db8b356860882df15968cc81d6e742ebf1ae48c6606ff48d707779efa2"} Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c5176b1f1880ed9e141c941246b0d2adffd33324282b08ff31ff2b20dd8f056a"} Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863984 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f5e7b138191c2204a7ef3b36cec022f0f31b3ec2e2aceb061029fb6bf96dc051"} Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.863967 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.864571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.864596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.864604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.865072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.865096 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.865104 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.944688 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.944893 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 
20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.944940 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.945953 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.946013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:24 crc kubenswrapper[4957]: I1128 20:49:24.946038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.866887 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.868483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.868573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.868599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.908515 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.908660 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.908701 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.909823 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.909870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:25 crc kubenswrapper[4957]: I1128 20:49:25.909885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:26 crc kubenswrapper[4957]: I1128 20:49:26.358843 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:26 crc kubenswrapper[4957]: I1128 20:49:26.870795 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:26 crc kubenswrapper[4957]: I1128 20:49:26.872366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:26 crc kubenswrapper[4957]: I1128 20:49:26.872421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:26 crc kubenswrapper[4957]: I1128 20:49:26.872440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.359702 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.359973 4957 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.361434 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.361462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.361470 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.367964 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.875345 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.876361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.876412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:27 crc kubenswrapper[4957]: I1128 20:49:27.876433 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:28 crc kubenswrapper[4957]: I1128 20:49:28.050699 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 20:49:28 crc kubenswrapper[4957]: I1128 20:49:28.050933 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:28 crc kubenswrapper[4957]: I1128 20:49:28.052531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:28 crc kubenswrapper[4957]: I1128 20:49:28.052600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:28 crc kubenswrapper[4957]: I1128 20:49:28.052618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:29 crc kubenswrapper[4957]: I1128 20:49:29.763718 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 20:49:29 crc kubenswrapper[4957]: I1128 20:49:29.763930 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:29 crc kubenswrapper[4957]: I1128 20:49:29.765173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:29 crc kubenswrapper[4957]: I1128 20:49:29.765266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:29 crc kubenswrapper[4957]: I1128 20:49:29.765284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:30 crc kubenswrapper[4957]: I1128 20:49:30.295320 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:30 crc kubenswrapper[4957]: I1128 20:49:30.295468 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:30 crc kubenswrapper[4957]: 
I1128 20:49:30.296560 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:30 crc kubenswrapper[4957]: I1128 20:49:30.296594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:30 crc kubenswrapper[4957]: I1128 20:49:30.296606 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:30 crc kubenswrapper[4957]: E1128 20:49:30.860834 4957 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 20:49:31 crc kubenswrapper[4957]: I1128 20:49:31.854331 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 20:49:31 crc kubenswrapper[4957]: I1128 20:49:31.854533 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:31 crc kubenswrapper[4957]: I1128 20:49:31.855612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:31 crc kubenswrapper[4957]: I1128 20:49:31.855665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:31 crc kubenswrapper[4957]: I1128 20:49:31.855682 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.081304 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.081564 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.083788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.083883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.083913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.087502 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:32 crc kubenswrapper[4957]: W1128 20:49:32.292327 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.292425 4957 trace.go:236] Trace[1522192951]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 20:49:22.291) (total time: 10000ms): Nov 28 20:49:32 crc kubenswrapper[4957]: Trace[1522192951]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:49:32.292) Nov 28 20:49:32 crc kubenswrapper[4957]: Trace[1522192951]: [10.000489339s] [10.000489339s] END Nov 28 20:49:32 crc kubenswrapper[4957]: E1128 20:49:32.292445 
4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 20:49:32 crc kubenswrapper[4957]: E1128 20:49:32.363584 4957 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 28 20:49:32 crc kubenswrapper[4957]: W1128 20:49:32.372862 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.372993 4957 trace.go:236] Trace[91756870]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 20:49:22.372) (total time: 10000ms): Nov 28 20:49:32 crc kubenswrapper[4957]: Trace[91756870]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:49:32.372) Nov 28 20:49:32 crc kubenswrapper[4957]: Trace[91756870]: [10.000902307s] [10.000902307s] END Nov 28 20:49:32 crc kubenswrapper[4957]: E1128 20:49:32.373025 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.738423 4957 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 20:49:32 crc kubenswrapper[4957]: E1128 20:49:32.791167 4957 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.887578 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.888503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.888648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:32 crc kubenswrapper[4957]: I1128 20:49:32.888734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:33 crc kubenswrapper[4957]: W1128 20:49:33.456729 4957 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.457305 4957 
trace.go:236] Trace[1063996224]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 20:49:23.455) (total time: 10002ms): Nov 28 20:49:33 crc kubenswrapper[4957]: Trace[1063996224]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:49:33.456) Nov 28 20:49:33 crc kubenswrapper[4957]: Trace[1063996224]: [10.002224002s] [10.002224002s] END Nov 28 20:49:33 crc kubenswrapper[4957]: E1128 20:49:33.457447 4957 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.738141 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.738253 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.741702 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.741767 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.964130 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.965058 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.965099 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.965112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:33 crc kubenswrapper[4957]: I1128 20:49:33.965136 4957 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.082272 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.082382 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.914916 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.915091 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.916054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.916097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.916107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:35 crc kubenswrapper[4957]: I1128 20:49:35.918597 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:36 crc kubenswrapper[4957]: I1128 20:49:36.896820 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:36 crc kubenswrapper[4957]: I1128 20:49:36.897640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:36 crc kubenswrapper[4957]: I1128 20:49:36.897686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:36 crc kubenswrapper[4957]: I1128 20:49:36.897697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:37 crc kubenswrapper[4957]: I1128 20:49:37.136261 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 20:49:37 crc kubenswrapper[4957]: I1128 20:49:37.151509 4957 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.077137 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.077332 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.078446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:38 crc 
kubenswrapper[4957]: I1128 20:49:38.078489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.078501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.092314 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 20:49:38 crc kubenswrapper[4957]: E1128 20:49:38.695909 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.700128 4957 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.701092 4957 trace.go:236] Trace[1570104619]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 20:49:24.682) (total time: 14018ms): Nov 28 20:49:38 crc kubenswrapper[4957]: Trace[1570104619]: ---"Objects listed" error: 14018ms (20:49:38.700) Nov 28 20:49:38 crc kubenswrapper[4957]: Trace[1570104619]: [14.018491859s] [14.018491859s] END Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.701128 4957 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.901336 4957 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.902429 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.902471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.902484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.937601 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40292->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.937617 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40296->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.937654 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40292->192.168.126.11:17697: read: connection reset by peer" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.937700 4957 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40296->192.168.126.11:17697: read: connection reset by peer" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.937998 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.938020 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.938240 4957 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.938263 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.973309 4957 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.973638 4957 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 20:49:38 crc kubenswrapper[4957]: E1128 20:49:38.973664 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.976893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.976924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.976932 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.976947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.976961 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:38Z","lastTransitionTime":"2025-11-28T20:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:38 crc kubenswrapper[4957]: E1128 20:49:38.992860 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.996269 4957 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.997097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.997127 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.997141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.997164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:38 crc kubenswrapper[4957]: I1128 20:49:38.997177 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:38Z","lastTransitionTime":"2025-11-28T20:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.006276 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.012661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.012704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.012714 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.012729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.012739 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.027882 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.031173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.031290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.031300 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.031315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.031324 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.041609 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.041717 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.041746 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.141943 4957 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.242371 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.292116 4957 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.342916 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.443805 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.544750 4957 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.629707 4957 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.648024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.648376 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.648474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.648580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.648663 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.652692 4957 csr.go:261] certificate signing request csr-lrvjb is approved, waiting to be issued Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.696013 4957 csr.go:257] certificate signing request csr-lrvjb is issued Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.740503 4957 apiserver.go:52] "Watching apiserver" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.743677 4957 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.743919 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744332 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744415 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744454 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744487 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.744516 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.744528 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744595 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.744688 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.744748 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.747901 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.747901 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.747946 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.747941 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.747952 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.748200 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.748406 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.748728 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.748849 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.751180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.751210 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.751243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.751260 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.751271 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.775527 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.786922 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.797239 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806849 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806839 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806887 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.806954 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") 
pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.807159 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.807231 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.807264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.807676 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.807809 4957 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.808450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.812619 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.812640 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.819171 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.819204 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.819238 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.819310 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:40.319288554 +0000 UTC m=+19.787936463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.820323 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.820349 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.820362 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.820402 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:40.320390232 +0000 UTC m=+19.789038141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.821743 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.826344 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.829019 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.839703 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.845294 4957 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.851022 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.854567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.854631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.854648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.854670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.854684 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.864641 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.876669 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.888138 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.900790 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.904899 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.906344 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085" exitCode=255 Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.906385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085"} Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.907741 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.907871 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.907965 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908054 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908147 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908080 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908440 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908523 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908621 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908706 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908782 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908861 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908378 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908593 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908737 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908734 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908816 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908919 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.908915 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909010 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909143 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909307 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909431 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909449 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909605 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909689 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909698 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909761 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909787 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909810 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909829 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909847 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909864 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909903 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.909922 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910133 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910266 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910282 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910312 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910337 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910349 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910375 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910396 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910418 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910457 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910566 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910568 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910747 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910775 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910798 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910881 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910935 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.910956 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911245 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911305 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911164 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911406 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911569 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911709 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911735 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.911939 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912019 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912103 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912202 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912472 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912561 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912610 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912724 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912698 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912793 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912850 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912870 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912887 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912906 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912924 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912942 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912962 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912979 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912996 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913017 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913037 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913055 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913073 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913138 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913158 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913176 4957 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913195 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913233 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913253 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913271 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913287 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913356 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913383 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913414 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913443 
4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913476 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913508 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913532 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913555 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913578 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913603 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913626 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913654 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913687 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 
20:49:39.913715 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913739 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913788 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913812 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913837 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913860 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913921 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913952 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913985 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 
20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914004 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914023 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914081 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914164 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914192 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914235 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914254 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914271 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914312 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914328 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914350 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914385 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914410 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914433 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914451 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912905 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916255 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912918 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913110 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913306 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913531 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913812 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913840 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913893 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.913918 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914014 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914114 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914193 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914265 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914379 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914409 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914460 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.914486 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:49:40.414470504 +0000 UTC m=+19.883118413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.914639 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.912484 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915403 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915466 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915552 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915566 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915568 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915787 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.915939 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916114 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916149 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916661 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916694 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916711 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916732 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916751 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916809 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916827 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916845 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.916880 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917010 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917037 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917059 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917114 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917163 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917180 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917199 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917236 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917268 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917290 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917309 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917325 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917368 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917395 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917419 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917442 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917461 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917558 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917579 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917602 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917627 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917649 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917668 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917750 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917771 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917789 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917810 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917829 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917847 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917865 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917935 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917956 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917977 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.917997 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918027 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918095 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918113 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918132 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918150 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918196 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918229 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918246 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918265 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918280 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918298 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918314 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918331 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918348 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918365 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918384 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918401 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918418 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918436 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918454 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918514 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918535 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918552 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918570 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918588 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918606 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918647 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918666 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918691 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918709 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918729 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918748 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918776 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918799 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918817 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918837 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918856 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918874 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918895 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918916 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918939 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.918960 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 
20:49:39.919007 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919034 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919074 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919126 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919296 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919438 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919452 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919463 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919477 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 
28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919491 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919504 4957 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919516 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919525 4957 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919537 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919548 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919558 4957 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919571 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919585 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919599 4957 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919613 4957 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919626 4957 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919637 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919650 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919661 4957 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919676 4957 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919691 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919702 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919713 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919723 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919735 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919749 4957 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919763 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919779 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919789 4957 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 
20:49:39.919800 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919814 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919824 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919836 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919850 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919864 4957 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919878 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.919893 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.920964 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.921241 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.921265 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.921584 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922011 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922281 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922560 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922680 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922571 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922814 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.922849 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.922913 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:40.42289502 +0000 UTC m=+19.891542929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922922 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.922968 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.923130 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.923251 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.923164 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.923288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.923380 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924439 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924435 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924525 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924738 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924827 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.924913 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.925279 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.925300 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.925599 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926004 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.925745 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926135 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926312 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926797 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.926802 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.927037 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.927325 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.927824 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.927963 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928302 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928397 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928510 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928531 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928541 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928849 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928867 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928868 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929179 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929445 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929536 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929852 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929924 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929984 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929982 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930097 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930091 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930648 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934690 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.930498 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: E1128 20:49:39.935167 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:40.435136024 +0000 UTC m=+19.903783933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930568 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930627 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.930973 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
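The nestedpendingoperations entry above refuses to retry the nginx-conf mount until 20:49:40.435, exactly 500ms after the failure, and reports that spacing as durationBeforeRetry. A sketch of that per-volume retry gate, assuming the usual exponential shape: the 500ms base matches the log, while the doubling factor and the cap are illustrative assumptions, not values taken from this capture.

    import datetime as dt

    class VolumeRetryBackoff:
        """Illustrative per-volume retry gate: first failure waits 500ms,
        consecutive failures double the wait, up to an assumed cap."""
        def __init__(self, base=0.5, factor=2.0, cap=120.0):
            self.base, self.factor, self.cap = base, factor, cap
            self.not_before = {}  # volume name -> (delay_s, earliest retry)

        def record_failure(self, volume, now):
            delay, _ = self.not_before.get(volume, (self.base / self.factor, None))
            delay = min(delay * self.factor, self.cap)
            self.not_before[volume] = (delay, now + dt.timedelta(seconds=delay))
            return self.not_before[volume][1]  # "No retries permitted until ..."

        def may_retry(self, volume, now):
            entry = self.not_before.get(volume)
            return entry is None or now >= entry[1]

A first record_failure at 20:49:39.935 yields an earliest retry of 20:49:40.435, matching the timestamp the kubelet logs above.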
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.931031 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.931185 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.931288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.931625 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.928956 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932154 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932204 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932286 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932417 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932590 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929311 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.929313 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932760 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932824 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932919 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.932981 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933227 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933268 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933531 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933706 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.933884 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934010 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934171 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934360 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934381 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934425 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934523 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.934604 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.938053 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.939235 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.941883 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.941918 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942065 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942091 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942105 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942176 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942238 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942291 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942616 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.942730 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.945577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.946456 4957 scope.go:117] "RemoveContainer" containerID="bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.949292 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.949674 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.954024 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.954310 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.962725 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.962725 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.963610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.965839 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.966593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.966634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.966644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.966661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.966671 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:39Z","lastTransitionTime":"2025-11-28T20:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.967237 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
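Unlike the escaped status patches, the setters.go condition above is plain JSON inside the entry, so the reason the node went NotReady can be pulled out directly. A small sketch against that visible format:

    import json, re

    def node_ready_condition(entry):
        """Extract the condition={...} payload from a
        'Node became not ready' setters.go entry."""
        m = re.search(r'condition=(\{.*\})', entry)
        if not m:
            return None
        cond = json.loads(m.group(1))
        return cond["type"], cond["status"], cond.get("reason"), cond.get("message")

For the entry above this returns ('Ready', 'False', 'KubeletNotReady', '... no CNI configuration file in /etc/kubernetes/cni/net.d/ ...'), consistent with the network plugin (ovn-kubernetes, judging by the ovnkube-* volumes in this log) not yet having rewritten its CNI config after the kubelet restart.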
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.968764 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.975487 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.975619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.976993 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.977618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.978195 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.978464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.978888 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.979024 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.979444 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.980683 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.981367 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.981979 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.981971 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.982049 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.993784 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:49:39 crc kubenswrapper[4957]: I1128 20:49:39.996673 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.003797 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.003830 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.005579 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.018134 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.020832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021011 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021080 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021133 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021185 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021261 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021314 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: 
I1128 20:49:40.021374 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021427 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021479 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021535 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021589 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021646 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021699 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021749 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021800 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021848 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021902 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.021960 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022012 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc 
kubenswrapper[4957]: I1128 20:49:40.022061 4957 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022123 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022197 4957 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022292 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022351 4957 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022402 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022457 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022517 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022572 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022630 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022686 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022738 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022787 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: 
I1128 20:49:40.022846 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022931 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.022999 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023051 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023100 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023160 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023230 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023299 4957 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023355 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023406 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023456 4957 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023508 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023567 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023659 4957 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023719 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023770 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023824 4957 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023879 4957 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023930 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.023988 4957 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024040 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024095 4957 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024162 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024243 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024310 4957 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024346 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 20:49:40 crc 
kubenswrapper[4957]: I1128 20:49:40.024366 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024447 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024458 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024469 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024479 4957 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024491 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024504 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024520 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024531 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024540 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024550 4957 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024559 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024569 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" 
DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024579 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024589 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024598 4957 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024607 4957 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024617 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024628 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024637 4957 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024646 4957 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024655 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024665 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024673 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024682 4957 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024691 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 
20:49:40.024701 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024710 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024720 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024730 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024741 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024750 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024759 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024768 4957 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024777 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024818 4957 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024828 4957 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024837 4957 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024848 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024856 4957 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024865 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024874 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024883 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024892 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024901 4957 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024910 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024919 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024928 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024937 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024946 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024955 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024964 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024975 4957 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024984 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.024993 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025001 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025010 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025020 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025028 4957 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025038 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025047 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025056 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025065 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025076 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025085 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025095 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025104 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025114 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025123 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025132 4957 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025118 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025141 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025178 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025190 4957 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025201 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025245 4957 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025255 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025265 4957 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025277 4957 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025289 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025299 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025308 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025317 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025879 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025897 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025911 4957 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025924 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025932 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025942 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025951 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025960 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025969 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025977 4957 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025985 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath 
\"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.025993 4957 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026064 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026078 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026090 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026100 4957 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026110 4957 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.026120 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.058592 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.062332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.071273 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.073182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.073231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.073241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.073255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.073265 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.097140 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wd5v9"]
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.097597 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wd5v9"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.100965 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.101198 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.101450 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.105403 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3358b1817f1ab5c054c3890a2f10d75fc3109ae6e907c1180e135e5fedfb7b0f WatchSource:0}: Error finding container 3358b1817f1ab5c054c3890a2f10d75fc3109ae6e907c1180e135e5fedfb7b0f: Status 404 returned error can't find the container with id 3358b1817f1ab5c054c3890a2f10d75fc3109ae6e907c1180e135e5fedfb7b0f
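
The entries above show two linked failure modes on this node: the kubelet marks the node NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the pod status patches in the surrounding entries are rejected because the pod.network-node-identity.openshift.io webhook endpoint at 127.0.0.1:9743 refuses connections. Both facts can be checked directly from the node. The following is a minimal triage sketch, a hypothetical helper that is not part of the kubelet; the address and directory path are taken verbatim from the log entries.

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // The status patches above fail with "dial tcp 127.0.0.1:9743:
        // connect: connection refused"; try the same dial directly.
        if conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second); err != nil {
            fmt.Printf("webhook endpoint unreachable: %v\n", err)
        } else {
            conn.Close()
            fmt.Println("webhook endpoint is accepting connections")
        }

        // The NodeNotReady condition reports "no CNI configuration file in
        // /etc/kubernetes/cni/net.d/"; see whether the directory has entries.
        entries, err := os.ReadDir("/etc/kubernetes/cni/net.d/")
        if err != nil {
            fmt.Printf("cannot read CNI config dir: %v\n", err)
        } else if len(entries) == 0 {
            fmt.Println("CNI config dir is empty; the network plugin has not written its config yet")
        } else {
            fmt.Printf("%d CNI configuration file(s) present\n", len(entries))
        }
    }

If the dial fails but the CNI directory is populated, the webhook backend is the remaining blocker; here both conditions fail, which matches the cascade recorded below.
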
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.109600 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.122087 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.139057 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.151538 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.171716 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.176671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.176702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.176712 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.176725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.176733 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.183742 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.198488 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.213856 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.229327 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7mz\" (UniqueName: \"kubernetes.io/projected/40dd1333-011f-44fd-b0ce-2f289af3a4d4-kube-api-access-kv7mz\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.229395 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40dd1333-011f-44fd-b0ce-2f289af3a4d4-hosts-file\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.282007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.282034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.282042 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.282055 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.282064 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.330110 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv7mz\" (UniqueName: \"kubernetes.io/projected/40dd1333-011f-44fd-b0ce-2f289af3a4d4-kube-api-access-kv7mz\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.330173 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.330199 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.330258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40dd1333-011f-44fd-b0ce-2f289af3a4d4-hosts-file\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.330342 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40dd1333-011f-44fd-b0ce-2f289af3a4d4-hosts-file\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330425 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330469 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330503 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330514 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330530 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330558 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330573 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:41.33055196 +0000 UTC m=+20.799199929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.330603 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:41.330583001 +0000 UTC m=+20.799231000 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.349810 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7mz\" (UniqueName: \"kubernetes.io/projected/40dd1333-011f-44fd-b0ce-2f289af3a4d4-kube-api-access-kv7mz\") pod \"node-resolver-wd5v9\" (UID: \"40dd1333-011f-44fd-b0ce-2f289af3a4d4\") " pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.384775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.384811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.384819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.384833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.384843 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.426203 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wd5v9" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.431185 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.431304 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.431399 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.431455 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:41.431441386 +0000 UTC m=+20.900089295 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.431498 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:49:41.431474276 +0000 UTC m=+20.900122175 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.435786 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40dd1333_011f_44fd_b0ce_2f289af3a4d4.slice/crio-55968a216e08886bb1039a78f01d68e683190ced3ac69f33dc4fb6c2a08aa181 WatchSource:0}: Error finding container 55968a216e08886bb1039a78f01d68e683190ced3ac69f33dc4fb6c2a08aa181: Status 404 returned error can't find the container with id 55968a216e08886bb1039a78f01d68e683190ced3ac69f33dc4fb6c2a08aa181 Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.490584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.490617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.490627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.490641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.490650 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.495327 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4sml5"] Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.495616 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.495791 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hq5x2"] Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.496036 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dbjsr"] Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.496259 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.496546 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.504705 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.504869 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.504891 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.504879 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.504875 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.505156 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.505167 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.505181 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.505877 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qhqwg"] Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.506602 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.507603 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.507650 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.507656 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.507755 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528199 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528474 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528186 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528734 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528750 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528859 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.528964 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.529058 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.533345 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.533445 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.533493 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:41.533478181 +0000 UTC m=+21.002126090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.560881 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.574754 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.589342 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.593687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.593751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.593763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.593784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.593795 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.600563 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.614583 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.628100 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634714 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634754 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-conf-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634774 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-etc-kubernetes\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634795 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634815 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634845 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-os-release\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634882 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-hostroot\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634903 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-mcd-auth-proxy-config\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634957 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-k8s-cni-cncf-io\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.634979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5dp\" (UniqueName: \"kubernetes.io/projected/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-kube-api-access-cq5dp\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635157 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635265 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635312 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cnibin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635361 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-os-release\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635394 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635415 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfpm\" (UniqueName: \"kubernetes.io/projected/cb1978e2-0fff-4af0-b1d4-e21d677ae377-kube-api-access-vpfpm\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635435 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-system-cni-dir\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635455 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635475 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqrd\" (UniqueName: \"kubernetes.io/projected/b16fffbf-545b-489a-a0de-da602df9d272-kube-api-access-ncqrd\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635549 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635589 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635606 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635674 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635723 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635751 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-socket-dir-parent\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635777 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-daemon-config\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635820 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635854 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-cnibin\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635873 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635891 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635911 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635948 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.635985 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636010 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-multus\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636032 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-proxy-tls\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636048 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-rootfs\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636069 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636095 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-netns\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-kubelet\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636151 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636183 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqww\" (UniqueName: \"kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636223 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-system-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636249 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cni-binary-copy\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636267 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-bin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.636288 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-multus-certs\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc 
kubenswrapper[4957]: I1128 20:49:40.640847 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.651406 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.665343 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.666363 4957 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666583 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666632 4957 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666660 4957 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666687 4957 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666712 4957 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666740 4957 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666763 4957 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666786 4957 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666809 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666831 4957 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666856 4957 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.666888 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667259 4957 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667280 4957 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667310 4957 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667314 4957 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667329 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667347 4957 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667369 4957 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667377 4957 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667403 4957 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667428 4957 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667450 4957 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667470 4957 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667494 4957 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch 
lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667515 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.667556 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/events\": read tcp 38.102.83.111:60340->38.102.83.111:6443: use of closed network connection" event="&Event{ObjectMeta:{node-resolver-wd5v9.187c46c86d303096 openshift-dns 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-dns,Name:node-resolver-wd5v9,UID:40dd1333-011f-44fd-b0ce-2f289af3a4d4,APIVersion:v1,ResourceVersion:26508,FieldPath:spec.containers{dns-node-resolver},},Reason:Created,Message:Created container dns-node-resolver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 20:49:40.661670038 +0000 UTC m=+20.130317947,LastTimestamp:2025-11-28 20:49:40.661670038 +0000 UTC m=+20.130317947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667664 4957 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667693 4957 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667718 4957 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667883 4957 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.667912 4957 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.696818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.696857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.696866 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.696883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.696895 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.698289 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-28 20:44:39 +0000 UTC, rotation deadline is 2026-10-05 13:10:34.08821727 +0000 UTC Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.698364 4957 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7456h20m53.389856522s for next certificate rotation Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737038 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-rootfs\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737085 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-netns\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-kubelet\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737121 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737140 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqww\" (UniqueName: \"kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737178 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-system-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737195 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cni-binary-copy\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737231 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-bin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-rootfs\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737252 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-netns\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-multus-certs\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737295 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-bin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737320 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737305 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-kubelet\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737275 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-multus-certs\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-system-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-conf-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737441 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-conf-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737472 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-etc-kubernetes\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737513 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-etc-kubernetes\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737516 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737544 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737602 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737621 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-os-release\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737729 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737797 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-hostroot\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.737871 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-os-release\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.738015 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cni-binary-copy\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.738163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.738469 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.738522 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-hostroot\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.738562 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-mcd-auth-proxy-config\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739152 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-mcd-auth-proxy-config\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739257 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-k8s-cni-cncf-io\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739289 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-run-k8s-cni-cncf-io\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739320 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5dp\" (UniqueName: \"kubernetes.io/projected/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-kube-api-access-cq5dp\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739498 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739357 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739593 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739667 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cnibin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739721 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-os-release\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739764 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-cnibin\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739780 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfpm\" (UniqueName: \"kubernetes.io/projected/cb1978e2-0fff-4af0-b1d4-e21d677ae377-kube-api-access-vpfpm\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739818 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-os-release\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739824 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739862 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-system-cni-dir\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739884 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739919 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-system-cni-dir\") pod \"multus-additional-cni-plugins-dbjsr\" 
(UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739920 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739978 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqrd\" (UniqueName: \"kubernetes.io/projected/b16fffbf-545b-489a-a0de-da602df9d272-kube-api-access-ncqrd\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.739988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740001 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740024 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740082 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740099 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-socket-dir-parent\") pod \"multus-4sml5\" (UID: 
\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740098 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740125 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-daemon-config\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740176 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-socket-dir-parent\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740187 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740230 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740234 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740130 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc 
kubenswrapper[4957]: I1128 20:49:40.740267 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-cnibin\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740343 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740368 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740394 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740413 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-multus\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740432 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-proxy-tls\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740500 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740531 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b16fffbf-545b-489a-a0de-da602df9d272-cnibin\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740555 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740577 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-cni-dir\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740667 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cb1978e2-0fff-4af0-b1d4-e21d677ae377-host-var-lib-cni-multus\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740811 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cb1978e2-0fff-4af0-b1d4-e21d677ae377-multus-daemon-config\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740873 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b16fffbf-545b-489a-a0de-da602df9d272-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.740897 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.741249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides\") pod 
\"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.742046 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.743132 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-proxy-tls\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.756649 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5dp\" (UniqueName: \"kubernetes.io/projected/8d41c2ca-d1ca-46b0-be19-6e4693f0b827-kube-api-access-cq5dp\") pod \"machine-config-daemon-hq5x2\" (UID: \"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\") " pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.757704 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfpm\" (UniqueName: \"kubernetes.io/projected/cb1978e2-0fff-4af0-b1d4-e21d677ae377-kube-api-access-vpfpm\") pod \"multus-4sml5\" (UID: \"cb1978e2-0fff-4af0-b1d4-e21d677ae377\") " pod="openshift-multus/multus-4sml5" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.758764 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqrd\" (UniqueName: \"kubernetes.io/projected/b16fffbf-545b-489a-a0de-da602df9d272-kube-api-access-ncqrd\") pod \"multus-additional-cni-plugins-dbjsr\" (UID: \"b16fffbf-545b-489a-a0de-da602df9d272\") " pod="openshift-multus/multus-additional-cni-plugins-dbjsr" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.760434 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqww\" (UniqueName: \"kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww\") pod \"ovnkube-node-qhqwg\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798698 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.798698 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.810189 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4sml5"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.815131 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 20:49:40 crc kubenswrapper[4957]: E1128 20:49:40.815298 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.818393 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.818902 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.819774 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.820133 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.820428 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.820993 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: W1128 20:49:40.820987 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb1978e2_0fff_4af0_b1d4_e21d677ae377.slice/crio-5c16cb66ad79538f27258f938d19f140e726c51c84c83a373cbeed023ee62c0d WatchSource:0}: Error finding container 5c16cb66ad79538f27258f938d19f140e726c51c84c83a373cbeed023ee62c0d: Status 404 returned error can't find the container with id 5c16cb66ad79538f27258f938d19f140e726c51c84c83a373cbeed023ee62c0d
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.821493 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.822075 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.823802 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.824429 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.825409 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.825962 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.827001 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.829741 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.830402 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.831374 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.831880 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.832873 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.833067 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbjsr"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.833265 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.833830 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.834802 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.835257 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.836244 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.836711 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.837773 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.838194 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.839205 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.839855 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.840783 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.841174 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.841712 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.842160 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.843027 4957 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.843134 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.844738 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.845744 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.846132 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.847622 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.848661 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.849181 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.850135 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.850906 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.851445 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.852441 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.853406 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.864006 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.864709 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.865703 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.866424 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.870074 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.871940 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.876374 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.877102 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.877856 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.880058 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.880714 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.903579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.903999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.904016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.904040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.904053 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:40Z","lastTransitionTime":"2025-11-28T20:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.912074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerStarted","Data":"5c16cb66ad79538f27258f938d19f140e726c51c84c83a373cbeed023ee62c0d"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.919694 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wd5v9" event={"ID":"40dd1333-011f-44fd-b0ce-2f289af3a4d4","Type":"ContainerStarted","Data":"f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.919747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wd5v9" event={"ID":"40dd1333-011f-44fd-b0ce-2f289af3a4d4","Type":"ContainerStarted","Data":"55968a216e08886bb1039a78f01d68e683190ced3ac69f33dc4fb6c2a08aa181"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.923705 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.929659 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.929854 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.933561 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3358b1817f1ab5c054c3890a2f10d75fc3109ae6e907c1180e135e5fedfb7b0f"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.935548 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.935593 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3309593c278d6773c459460fe948b71a7a0a9a7b3b39a55b0eb2ee8e31133f46"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.937391 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"de82f165a41e3868882aa968ae2a413dfdf1f586fa008419dfe04b2854f27944"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.945894 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"7f1edccc4f6b23fbc1a2a54144e4b70d8ef252831e4708dd3fbfdffa9887ace2"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.948326 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.948350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.948360 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3fd8b5ce60280a131e1f41c77be7a6e4c168523c1d1a22dd521fedcb56cb20b"}
Nov 28 20:49:40 crc kubenswrapper[4957]: I1128 20:49:40.949368 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerStarted","Data":"71d0077bc561a9631aedfa7013e5f991f31d324d61e5ce1801dbb250008e6211"}
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.111183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.111240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.111253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.111268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.111287 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.213578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.213619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.213648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.213666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.213677 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.315733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.315776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.315788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.315804 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.315815 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.345447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.345517 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345673 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345709 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345723 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345782 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:43.345760715 +0000 UTC m=+22.814408624 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345682 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345809 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345817 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.345837 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:43.345831706 +0000 UTC m=+22.814479615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.417940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.418000 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.418009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.418022 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.418030 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
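Note: the projected.go errors above mean the kube-api-access-* projected volumes cannot be assembled yet because the kubelet's object caches do not hold that namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps; nestedpendingoperations.go then schedules a retry with backoff (2s here). An illustrative Go sketch of the projected-volume shape these sources describe; the paths, keys, and token lifetime are assumed defaults, not values taken from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // assumed default token lifetime
        vol := corev1.Volume{
            Name: "kube-api-access-cqllr",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        }},
                        // Both ConfigMaps below must be in the kubelet's cache
                        // before SetUp can succeed; keys/paths are assumed.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
                        }},
                    },
                },
            },
        }
        fmt.Println("projected volume:", vol.Name)
    }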
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.446686 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.446816 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.446843 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:49:43.446822325 +0000 UTC m=+22.915470234 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.446923 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.446979 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:43.446966339 +0000 UTC m=+22.915614248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.470544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.500308 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.507256 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.522048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.522439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.522449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.522464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.522473 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.533374 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.534870 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.537785 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.544814 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.547620 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.547780 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.547924 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:43.547901507 +0000 UTC m=+23.016549476 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.575543 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.590751 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.600095 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.625044 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.625089 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.625126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.625146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.625158 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.680962 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.681348 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.692966 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.709280 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.719721 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.726670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.726717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.726730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.726748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.726760 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.742829 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.754884 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.756437 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.769496 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.773015 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.781692 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.789571 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.790618 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.798020 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.806764 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is 
after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.812737 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.812829 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.812844 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:41 crc kubenswrapper[4957]: E1128 20:49:41.813035 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.817986 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.829067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.829106 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.829114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.829129 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.829139 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.832515 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.843160 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.854096 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.865374 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.876172 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.879191 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.885675 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.894165 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.895596 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.900809 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.904571 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.906376 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.917088 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.918593 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.931686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.931726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.931736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.931753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.931764 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:41Z","lastTransitionTime":"2025-11-28T20:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.935114 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.948178 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.950512 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.953099 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352" exitCode=0 Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.953153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.954689 4957 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerStarted","Data":"1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.956736 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10" exitCode=0 Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.956811 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.958700 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.960544 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.960579 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb"} Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.963562 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.976901 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.978633 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 20:49:41 crc kubenswrapper[4957]: I1128 20:49:41.990568 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.002510 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.017458 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.028110 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.031075 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.037366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.037391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.037400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.037414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.037423 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.038558 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.044378 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.047286 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.057948 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.069305 4957 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.074602 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.081446 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.082272 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.085705 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.089260 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.089791 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.092416 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.102273 4957 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.116323 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.131769 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.134682 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z 
is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.138819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.138847 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.138856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.138878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.138895 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.146990 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.159359 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.172064 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.211062 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.223383 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.240456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.240489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.240497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.240511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.240552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.273829 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.312640 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2
af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.342630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.342664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.342672 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.342687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.342696 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.350746 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.377335 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8qkjt"] Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.377675 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.391081 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.404149 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.423739 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.443101 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.444484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.444561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.444584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.444605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.444619 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.463853 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.516309 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"moun
tPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.547193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.547237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.547248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.547261 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.547270 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.554201 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.557604 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvfd\" (UniqueName: \"kubernetes.io/projected/16385382-457d-4c77-a56f-30917f1c3f66-kube-api-access-klvfd\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.557777 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16385382-457d-4c77-a56f-30917f1c3f66-host\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.557842 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16385382-457d-4c77-a56f-30917f1c3f66-serviceca\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.589985 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.637745 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.649973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.650016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.650025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.650040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.650053 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.658638 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvfd\" (UniqueName: \"kubernetes.io/projected/16385382-457d-4c77-a56f-30917f1c3f66-kube-api-access-klvfd\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.658698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16385382-457d-4c77-a56f-30917f1c3f66-host\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.658718 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16385382-457d-4c77-a56f-30917f1c3f66-serviceca\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.658869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16385382-457d-4c77-a56f-30917f1c3f66-host\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.659490 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16385382-457d-4c77-a56f-30917f1c3f66-serviceca\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.670493 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z"
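NOTE: Every "Failed to update status for pod" record in this section fails identically: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter is 2025-08-24T17:21:41Z while the node clock reads 2025-11-28, so every status patch is rejected. That gap is consistent with a CRC VM resumed long after its certificates were minted (the kube-apiserver-cert-regeneration-controller container above is the component that eventually rotates them; the webhook container below mounts its cert from /etc/webhook-cert/). A short sketch for inspecting that certificate's validity window from the node, assuming the third-party cryptography package is available; the host and port come from the failed Post in the log:

    #!/usr/bin/env python3
    # Fetch the webhook's serving certificate without verification (it is expired,
    # so a verifying handshake would fail exactly as the kubelet's did) and
    # compare its notAfter against the clock.
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # assumed installed; not in the stdlib

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook Post above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired certificate

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    not_after = cert.not_valid_after_utc  # older cryptography releases: not_valid_after
    now = datetime.now(timezone.utc)
    print(f"notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} now={now:%Y-%m-%dT%H:%M:%SZ} expired={now > not_after}")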
Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.701538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvfd\" (UniqueName: \"kubernetes.io/projected/16385382-457d-4c77-a56f-30917f1c3f66-kube-api-access-klvfd\") pod \"node-ca-8qkjt\" (UID: \"16385382-457d-4c77-a56f-30917f1c3f66\") " pod="openshift-image-registry/node-ca-8qkjt"
Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.708503 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8qkjt"
Nov 28 20:49:42 crc kubenswrapper[4957]: W1128 20:49:42.721202 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16385382_457d_4c77_a56f_30917f1c3f66.slice/crio-9520957331f242935f67be2b985113634eb07051f86d4a99cb0d0e9149eb6471 WatchSource:0}: Error finding container 9520957331f242935f67be2b985113634eb07051f86d4a99cb0d0e9149eb6471: Status 404 returned error can't find the container with id 9520957331f242935f67be2b985113634eb07051f86d4a99cb0d0e9149eb6471
Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.731519 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.753397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.753428 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.753437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.753451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.753460 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.771489 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.813516 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.813603 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: E1128 20:49:42.813643 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.857182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.857269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.857283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.857310 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.857324 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.861368 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.889344 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.940236 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.959984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.960038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.960049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.960064 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.960075 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:42Z","lastTransitionTime":"2025-11-28T20:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.963444 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8qkjt" event={"ID":"16385382-457d-4c77-a56f-30917f1c3f66","Type":"ContainerStarted","Data":"9520957331f242935f67be2b985113634eb07051f86d4a99cb0d0e9149eb6471"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966767 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966777 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966788 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966796 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.966806 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.968229 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8" exitCode=0 Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.968291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8"} Nov 28 20:49:42 crc kubenswrapper[4957]: 
I1128 20:49:42.970802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3"} Nov 28 20:49:42 crc kubenswrapper[4957]: I1128 20:49:42.977285 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.015397 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.051656 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.063455 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.063499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.063513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.063531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.063544 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.092130 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.131655 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.165916 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.165951 4957 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.165961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.165976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.165987 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.172251 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.209089 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.251007 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.268021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.268048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.268058 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.268073 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.268102 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.290703 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.331941 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.364607 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.364687 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364833 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364853 4957 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364866 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364888 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364934 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364953 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.364912 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:47.364897616 +0000 UTC m=+26.833545525 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.365045 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:47.365005329 +0000 UTC m=+26.833653268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.370714 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.370749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.370761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.370873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.370888 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.374081 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.412713 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.452585 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.465527 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.465668 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:49:47.465650119 +0000 UTC m=+26.934298028 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.465704 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.465867 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.465923 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:47.465910236 +0000 UTC m=+26.934558145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.472808 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.472829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.472840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.472855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.472865 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.492850 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.531422 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.566257 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.566373 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.566434 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:47.566418142 +0000 UTC m=+27.035066061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576237 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576254 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.576077 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa
50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.612297 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.657892 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.677855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.677900 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.677909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.677923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.677932 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.692282 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.730354 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.768890 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.780990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.781017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.781026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.781040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.781052 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.810941 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.811992 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.812161 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.812273 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:43 crc kubenswrapper[4957]: E1128 20:49:43.812448 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.883071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.883108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.883117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.883132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.883144 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.977538 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0" exitCode=0 Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.977763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.978881 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8qkjt" event={"ID":"16385382-457d-4c77-a56f-30917f1c3f66","Type":"ContainerStarted","Data":"0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.985761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.985825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.985841 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.985867 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.985882 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:43Z","lastTransitionTime":"2025-11-28T20:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:43 crc kubenswrapper[4957]: I1128 20:49:43.995849 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:43Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.011918 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.024307 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.039807 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.053521 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.066615 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.088907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.088955 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.088967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.088983 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.088994 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.091842 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.135464 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.168978 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.191249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.191293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.191305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.191323 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.191336 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.212952 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.250111 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.293738 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.293776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.293787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.293802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.293812 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.300653 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z 
is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.331654 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.371195 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.396277 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.396329 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.396342 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.396361 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.396377 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.420193 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa
50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.451558 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.490959 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.498678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.498716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.498725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.498739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.498749 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.531975 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.573694 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.601078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.601125 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.601137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.601156 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.601172 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.615292 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.659992 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.698514 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.704062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.704114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.704124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.704144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.704157 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.734061 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.773500 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.806467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.806505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.806514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.806526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.806535 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.809347 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.812637 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:44 crc kubenswrapper[4957]: E1128 20:49:44.812858 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.854180 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.893127 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.908661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.908699 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.908709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.908725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.908737 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:44Z","lastTransitionTime":"2025-11-28T20:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.932486 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:44Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.989160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"} Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.991938 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2" exitCode=0 Nov 28 20:49:44 crc kubenswrapper[4957]: I1128 20:49:44.992065 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.011037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.011083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.011093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.011110 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.011123 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.014066 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.032581 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.069178 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z 
is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.094624 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.113480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.113527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.113536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.113551 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.113562 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.135670 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.172571 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.241729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.241798 4957 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.241819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.241890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.241913 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.251664 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.266623 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.299460 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.344633 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.346602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.346692 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.346716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.346792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.346813 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.372165 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.411232 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.449080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.449123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.449135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.449151 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.449164 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.457008 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:
49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.492628 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:45Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.551426 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.551469 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.551481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.551497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.551509 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.654828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.654864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.654876 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.654893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.654904 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.757743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.757783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.757796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.757814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.757827 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.812662 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.812699 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:49:45 crc kubenswrapper[4957]: E1128 20:49:45.812804 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:49:45 crc kubenswrapper[4957]: E1128 20:49:45.812896 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.859651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.859685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.859693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.859706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.859716 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.962135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.962172 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.962183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.962198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.962225 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:45Z","lastTransitionTime":"2025-11-28T20:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.998300 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d" exitCode=0
Nov 28 20:49:45 crc kubenswrapper[4957]: I1128 20:49:45.998346 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d"}
Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.014474 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.026638 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.041988 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.057832 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.064978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.065014 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.065023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.065038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.065052 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.068629 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.099382 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.116344 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.132843 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.148016 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.165347 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.167149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.167197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.167253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.167283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.167306 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.178273 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.192488 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.206837 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.225232 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:46Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.270406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.270446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.270459 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.270476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.270488 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.372700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.372735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.372743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.372755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.372767 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.475546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.475581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.475591 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.475606 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.475614 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.578096 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.578191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.578262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.578290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.578308 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.680559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.680637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.680658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.680685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.680704 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.782999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.783040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.783053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.783069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.783081 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.812728 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:46 crc kubenswrapper[4957]: E1128 20:49:46.812858 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.885844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.885892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.885904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.885922 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.885935 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.988641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.988690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.988704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.988726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:46 crc kubenswrapper[4957]: I1128 20:49:46.988741 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:46Z","lastTransitionTime":"2025-11-28T20:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.004796 4957 generic.go:334] "Generic (PLEG): container finished" podID="b16fffbf-545b-489a-a0de-da602df9d272" containerID="e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa" exitCode=0 Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.004840 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerDied","Data":"e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.025201 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.042699 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.055018 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.068711 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.082418 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.090869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.090908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.090920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.090936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.090947 4957 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.095916 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.113534 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.126941 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.141561 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.153504 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.167230 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.193770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.193806 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.193818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.193833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.193844 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.194256 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z 
is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.208359 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.221101 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:47Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.296937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.297343 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.297354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.297371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.297386 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.399291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.399350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.399364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.399383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.399394 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.402975 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.403044 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403195 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403232 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403244 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403238 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403280 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403292 4957 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.403276921 +0000 UTC m=+34.871924830 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403297 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.403383 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.403355943 +0000 UTC m=+34.872003852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.501419 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.501455 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.501464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.501477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.501485 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.503906 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.504009 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.504070 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.504055753 +0000 UTC m=+34.972703662 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.504149 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.504188 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.504181277 +0000 UTC m=+34.972829186 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604731 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604816 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.604866 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.604928 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.604910689 +0000 UTC m=+35.073558598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604972 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.604994 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.707113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.707151 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.707160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.707173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.707182 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.810974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.811056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.811080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.811116 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.811140 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.812297 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.812297 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.812480 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:47 crc kubenswrapper[4957]: E1128 20:49:47.812593 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.915163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.915310 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.915334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.915367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:47 crc kubenswrapper[4957]: I1128 20:49:47.915389 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:47Z","lastTransitionTime":"2025-11-28T20:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.019785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.019872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.019895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.019934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.019962 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.020136 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.020645 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.020688 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.030024 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" event={"ID":"b16fffbf-545b-489a-a0de-da602df9d272","Type":"ContainerStarted","Data":"da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.044764 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.054174 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.057909 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.082060 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.104616 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.122011 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.122051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.122060 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.122074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.122084 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.130257 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.144324 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.157806 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 
2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.170890 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.187877 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.207509 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.221024 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.224765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.224807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.224819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.224836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.224846 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.233541 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.247011 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.265834 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247
447f12d5c3e653279c7dbcd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.276930 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.287348 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.306137 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.320268 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.327548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.327592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.327635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.327655 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.327667 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.333107 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.344763 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.355278 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.365592 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.379050 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.392510 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 
2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.404114 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.414818 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.428942 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.430063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.430150 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.430172 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.430196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.430229 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.439862 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T20:49:48Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.532529 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.532617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.532635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.532663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.532683 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.640045 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.640170 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.640197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.640307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.640332 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.742785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.742819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.742830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.742843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.742852 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.812516 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:48 crc kubenswrapper[4957]: E1128 20:49:48.812633 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.845101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.845137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.845149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.845168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.845180 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.947383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.947428 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.947437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.947453 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:48 crc kubenswrapper[4957]: I1128 20:49:48.947462 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:48Z","lastTransitionTime":"2025-11-28T20:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.033048 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.049966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.050034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.050054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.050083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.050106 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.052578 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.069337 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.080677 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.096081 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.110899 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.121466 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.140711 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.140760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.140772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.140790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.140802 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.145225 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247
447f12d5c3e653279c7dbcd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.155396 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.157521 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.158307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.158338 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.158347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.158366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.158377 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.168726 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.168790 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"9
54acfd8-81a0-40d5-975d-9c927901b7d2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.172632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.172687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.172701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.172727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.172743 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.185003 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.186815 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.190497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.190542 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.190555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.190575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.190589 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.201892 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.202572 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"9
54acfd8-81a0-40d5-975d-9c927901b7d2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.206722 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.206770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.206786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.206806 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.206829 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.215340 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.226812 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.235667 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 
2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.235879 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.237852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.237899 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.237911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.237996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.238012 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.240611 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.251899 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:49Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.340353 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.340398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.340407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.340423 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.340435 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.443188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.443243 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.443263 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.443279 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.443287 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.545709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.545744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.545752 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.545768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.545777 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.648222 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.648262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.648272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.648292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.648303 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.750405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.750498 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.750518 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.750546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.750564 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.812989 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.813058 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.813187 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:49 crc kubenswrapper[4957]: E1128 20:49:49.813346 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.853667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.853724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.853742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.853765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.853782 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.956158 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.956194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.956205 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.956235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:49 crc kubenswrapper[4957]: I1128 20:49:49.956246 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:49Z","lastTransitionTime":"2025-11-28T20:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.037106 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/0.log" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.041792 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2" exitCode=1 Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.041857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.042646 4957 scope.go:117] "RemoveContainer" containerID="428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.055281 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.058682 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.058727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.058739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.058757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.058771 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.070295 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.085175 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.097888 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.110292 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.126344 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.139640 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.150964 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.161158 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.161228 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.161245 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.161266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.161280 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.169693 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.185434 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.201910 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.230357 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247
447f12d5c3e653279c7dbcd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"87315 6254 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187604 6254 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.187832 6254 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187888 6254 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187930 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 20:49:49.187934 6254 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187956 6254 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.188408 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 20:49:49.189004 6254 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.248507 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.263577 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.263619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.263631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.263649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.263664 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.268053 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.366627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.366664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.366673 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.366687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.366696 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.469137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.469170 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.469181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.469198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.469225 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.571502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.571588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.571601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.571618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.571630 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.673950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.673986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.673994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.674009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.674019 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.776800 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.776860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.776873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.776892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.776906 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.812135 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:50 crc kubenswrapper[4957]: E1128 20:49:50.812336 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.828042 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.847105 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.860679 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.875926 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.878912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.878952 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.878964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.878982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.878993 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.891362 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.906336 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.921318 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.937101 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.950793 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.968137 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"87315 6254 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187604 6254 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.187832 6254 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187888 6254 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187930 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 20:49:49.187934 6254 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187956 6254 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.188408 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 20:49:49.189004 6254 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.978353 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.980940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.980997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.981010 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:50 crc 
kubenswrapper[4957]: I1128 20:49:50.981030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.981042 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:50Z","lastTransitionTime":"2025-11-28T20:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:50 crc kubenswrapper[4957]: I1128 20:49:50.989095 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.004682 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.016326 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.047783 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/0.log" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.050872 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.051256 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.066430 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.077176 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.082931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.082976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.082987 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.083001 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.083010 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.095574 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c0
6ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"87315 6254 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187604 6254 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.187832 6254 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187888 6254 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187930 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 20:49:49.187934 6254 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187956 6254 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.188408 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 20:49:49.189004 6254 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.107937 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.117924 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.128023 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.139243 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.147684 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.159079 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.169414 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.178462 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.186272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.186406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.186442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.186499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.186529 4957 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.190036 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.203952 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.213032 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.289255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.289317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.289335 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.289360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.289376 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.391693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.391737 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.391747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.391763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.391774 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.495175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.495248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.495270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.495299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.495321 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.597915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.597970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.597992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.598023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.598045 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.700977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.701028 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.701046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.701070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.701087 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.804155 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.804205 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.804238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.804256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.804271 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.812682 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.812729 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:51 crc kubenswrapper[4957]: E1128 20:49:51.812802 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:51 crc kubenswrapper[4957]: E1128 20:49:51.812928 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.907102 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.907146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.907161 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.907180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:51 crc kubenswrapper[4957]: I1128 20:49:51.907191 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:51Z","lastTransitionTime":"2025-11-28T20:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.010562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.010808 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.010867 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.010944 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.011009 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.058026 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/1.log" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.059124 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/0.log" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.064059 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94" exitCode=1 Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.064128 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.064258 4957 scope.go:117] "RemoveContainer" containerID="428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.065390 4957 scope.go:117] "RemoveContainer" containerID="8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94" Nov 28 20:49:52 crc kubenswrapper[4957]: E1128 20:49:52.065687 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.084250 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.103739 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.113276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.113334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.113354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.113382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.113402 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.123935 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.153565 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 
2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.172296 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.193525 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.210543 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.215651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.215713 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.215724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.215747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.215800 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.240762 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c0
6ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428eda219c3196760183f0eb5290e000634ad247447f12d5c3e653279c7dbcd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:49Z\\\",\\\"message\\\":\\\"87315 6254 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187604 6254 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.187832 6254 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187888 6254 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187930 6254 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 20:49:49.187934 6254 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 20:49:49.187956 6254 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 20:49:49.188408 6254 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 20:49:49.189004 6254 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.259232 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.280708 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.300434 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.318315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.318365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.318377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.318394 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.318406 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.319029 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.332120 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.351707 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:52Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.421188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.421276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.421292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.421314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.421330 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.523575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.523615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.523624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.523638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.523646 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.625732 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.625777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.625794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.625815 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.625830 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.728558 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.728598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.728610 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.728626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.728636 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.812353 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 20:49:52 crc kubenswrapper[4957]: E1128 20:49:52.812585 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.830545 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.830588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.830597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.830612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.830622 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.932895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.932936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.932951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.932966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:52 crc kubenswrapper[4957]: I1128 20:49:52.932976 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:52Z","lastTransitionTime":"2025-11-28T20:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.035628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.035671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.035681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.035696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.035706 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.070581 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/1.log"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.075525 4957 scope.go:117] "RemoveContainer" containerID="8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94"
Nov 28 20:49:53 crc kubenswrapper[4957]: E1128 20:49:53.075935 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.092914 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.111175 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.126308 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.142817 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.142882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.142896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.142917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.142933 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.169255 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6"] Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.169999 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.177111 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.177749 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.178104 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.202334 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.237630 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.245958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.246002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.246012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.246027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.246040 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.252170 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.271770 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.271810 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxq7\" (UniqueName: \"kubernetes.io/projected/30e3b5f4-fdf9-45bc-877e-2f8199648b27-kube-api-access-gnxq7\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6"
Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.271833 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovn-control-plane-metrics-cert\") pod
\"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.271874 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.272650 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c0
6ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.287282 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.298998 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.316565 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.330991 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.341388 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.348424 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.348469 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.348480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.348497 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.348506 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.355199 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.368088 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.373466 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.373503 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.373535 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxq7\" (UniqueName: \"kubernetes.io/projected/30e3b5f4-fdf9-45bc-877e-2f8199648b27-kube-api-access-gnxq7\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.373587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.374224 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-env-overrides\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.376293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 
20:49:53.380813 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.384573 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e3b5f4-fdf9-45bc-877e-2f8199648b27-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.396769 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.402496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxq7\" (UniqueName: \"kubernetes.io/projected/30e3b5f4-fdf9-45bc-877e-2f8199648b27-kube-api-access-gnxq7\") pod \"ovnkube-control-plane-749d76644c-clll6\" (UID: \"30e3b5f4-fdf9-45bc-877e-2f8199648b27\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.410535 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.423786 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.438864 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.450073 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.451234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.451264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.451273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.451288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.451299 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.463145 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.475909 4957 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.492196 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.492350 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: W1128 20:49:53.506184 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e3b5f4_fdf9_45bc_877e_2f8199648b27.slice/crio-740ca2dce622e18b9de5271d2251e974ea61498806de77548960705c547f62ff WatchSource:0}: Error finding container 740ca2dce622e18b9de5271d2251e974ea61498806de77548960705c547f62ff: Status 404 returned error can't find the container with id 740ca2dce622e18b9de5271d2251e974ea61498806de77548960705c547f62ff Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.520496 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.539354 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.555035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.555107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.555125 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.555153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.555173 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.559458 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.574993 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.591849 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.659123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.659181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.659193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.659238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.659256 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.763332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.763400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.763418 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.763446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.763466 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.812512 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.812512 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:53 crc kubenswrapper[4957]: E1128 20:49:53.812677 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:53 crc kubenswrapper[4957]: E1128 20:49:53.812736 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.867033 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.867093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.867108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.867131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.867149 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.969270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.969302 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.969311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.969324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:53 crc kubenswrapper[4957]: I1128 20:49:53.969332 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:53Z","lastTransitionTime":"2025-11-28T20:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.071599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.071638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.071650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.071668 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.071681 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.078730 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" event={"ID":"30e3b5f4-fdf9-45bc-877e-2f8199648b27","Type":"ContainerStarted","Data":"11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.078777 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" event={"ID":"30e3b5f4-fdf9-45bc-877e-2f8199648b27","Type":"ContainerStarted","Data":"faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.078789 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" event={"ID":"30e3b5f4-fdf9-45bc-877e-2f8199648b27","Type":"ContainerStarted","Data":"740ca2dce622e18b9de5271d2251e974ea61498806de77548960705c547f62ff"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.099001 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.111572 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.127062 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.142014 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.158919 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.174962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.175017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.175030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.175047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.175057 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.176714 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.195357 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.205870 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.217770 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.230695 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.247770 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.262164 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.274869 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.277197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.277248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.277256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.277269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.277280 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.293956 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c0
6ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.311604 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.314755 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7zhxb"] Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.315244 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.315315 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.328898 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.347577 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.364930 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.380072 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.380121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.380131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.380148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.380158 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.383454 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmshz\" (UniqueName: \"kubernetes.io/projected/cccab1fe-132a-4c45-909b-6f1ba7c8abab-kube-api-access-cmshz\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.383500 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.387631 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.400497 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.419143 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.436289 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.458866 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.474890 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 
20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.482129 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.482165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.482176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.482195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.482222 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.483940 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmshz\" (UniqueName: \"kubernetes.io/projected/cccab1fe-132a-4c45-909b-6f1ba7c8abab-kube-api-access-cmshz\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.483989 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.484109 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.484158 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:49:54.984143381 +0000 UTC m=+34.452791280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.491489 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 
2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.503866 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmshz\" (UniqueName: \"kubernetes.io/projected/cccab1fe-132a-4c45-909b-6f1ba7c8abab-kube-api-access-cmshz\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.506967 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.523030 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.537599 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.554474 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.567546 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584496 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584983 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.584993 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.687144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.687194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.687227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.687245 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.687259 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.789870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.789931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.789948 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.789972 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.789989 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.812163 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.812334 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.893016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.893059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.893068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.893085 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.893095 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.988735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.988919 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:54 crc kubenswrapper[4957]: E1128 20:49:54.989031 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:49:55.989002102 +0000 UTC m=+35.457650041 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.996118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.996160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.996171 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.996187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:54 crc kubenswrapper[4957]: I1128 20:49:54.996198 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:54Z","lastTransitionTime":"2025-11-28T20:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.098575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.098692 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.098719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.098758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.098784 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.201846 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.202114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.202227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.202303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.202367 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.305024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.305326 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.305412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.305476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.305562 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.408406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.408723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.408817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.408895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.408974 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.493649 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.493723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.493866 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.493885 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.493899 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.493951 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:11.493935356 +0000 UTC m=+50.962583285 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.494163 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.494260 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.494333 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.494426 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:11.494413748 +0000 UTC m=+50.963061647 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.512999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.513068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.513088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.513120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.513143 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.594969 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.595162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.595317 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.595363 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:50:11.595309305 +0000 UTC m=+51.063957254 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.595428 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:11.595411827 +0000 UTC m=+51.064059776 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.615749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.615791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.615801 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.615820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.615834 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.697304 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.697445 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.697573 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:11.697551126 +0000 UTC m=+51.166199035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.718161 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.718201 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.718228 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.718244 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.718255 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.812749 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.812873 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.812873 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.812906 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.813160 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 20:49:55 crc kubenswrapper[4957]: E1128 20:49:55.813364 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.821650 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.821711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.821725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.821744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.821760 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.923557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.923609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.923620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.923638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:55 crc kubenswrapper[4957]: I1128 20:49:55.923651 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:55Z","lastTransitionTime":"2025-11-28T20:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.002307 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:49:56 crc kubenswrapper[4957]: E1128 20:49:56.002470 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 20:49:56 crc kubenswrapper[4957]: E1128 20:49:56.002526 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:49:58.002508833 +0000 UTC m=+37.471156742 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.027553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.027626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.027640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.027658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.027674 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.130007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.130048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.130059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.130101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.130114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.233948 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.234004 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.234016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.234035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.234049 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.336741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.336818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.336836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.336872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.336895 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.363341 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.378437 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.392498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.407038 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.429118 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.440063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.440105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.440114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.440129 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.440141 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.444007 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.476420 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.492151 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.504812 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.521678 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.534047 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.542476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.542500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.542511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.542528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.542540 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.552504 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.570329 4957 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.582787 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.607042 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1
c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 
services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.619994 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 
20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.630891 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:56Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.645530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.645588 4957 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.645604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.645624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.645638 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.748760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.748811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.748828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.748845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.748857 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.812461 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:56 crc kubenswrapper[4957]: E1128 20:49:56.812594 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.850714 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.850759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.850774 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.850795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.850808 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.953232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.953278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.953290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.953309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:56 crc kubenswrapper[4957]: I1128 20:49:56.953322 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:56Z","lastTransitionTime":"2025-11-28T20:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.055730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.055833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.055859 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.055902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.055928 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.158764 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.158993 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.159114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.159194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.159326 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.262307 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.262549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.262613 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.262689 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.262756 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.365444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.365484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.365493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.365509 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.365518 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.467523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.467612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.467630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.467664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.467684 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.570459 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.570504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.570526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.570541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.570552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.672439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.672476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.672485 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.672500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.672510 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.775021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.775092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.775109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.775132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.775149 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.812523 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.812573 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.812608 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:57 crc kubenswrapper[4957]: E1128 20:49:57.812665 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:49:57 crc kubenswrapper[4957]: E1128 20:49:57.812757 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:57 crc kubenswrapper[4957]: E1128 20:49:57.812862 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.877451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.877498 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.877510 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.877527 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.877587 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.979983 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.980027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.980036 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.980052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:57 crc kubenswrapper[4957]: I1128 20:49:57.980062 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:57Z","lastTransitionTime":"2025-11-28T20:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.022654 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:58 crc kubenswrapper[4957]: E1128 20:49:58.022802 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:58 crc kubenswrapper[4957]: E1128 20:49:58.022887 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:50:02.022868786 +0000 UTC m=+41.491516695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.082392 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.082435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.082451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.082466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.082478 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.184707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.184782 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.184800 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.184824 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.184841 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.287173 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.287220 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.287231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.287247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.287261 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.389120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.389180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.389199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.389276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.389306 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.491982 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.492023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.492034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.492054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.492065 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.594474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.594528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.594548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.594576 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.594598 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.697164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.697204 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.697240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.697256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.697267 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.799258 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.799318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.799337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.799360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.799376 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.812720 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:49:58 crc kubenswrapper[4957]: E1128 20:49:58.812903 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.902201 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.902304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.902470 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.902508 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:58 crc kubenswrapper[4957]: I1128 20:49:58.902536 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:58Z","lastTransitionTime":"2025-11-28T20:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.005323 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.005438 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.005457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.005483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.005500 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.107393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.107454 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.107472 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.107504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.107522 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.209514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.209560 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.209576 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.209590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.209626 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.312656 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.312693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.312704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.312719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.312730 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.415447 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.415473 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.415482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.415508 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.415518 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.462813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.462874 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.462892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.462922 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.462947 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.480767 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:59Z is after 2025-08-24T17:21:41Z"
Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.485202 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.485282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
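Every failed status patch in this stretch shares one root cause, visible at the tail of the entry above: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. The NodeNotReady churn is the separate, expected symptom of the network plugin not yet having written a CNI config into /etc/kubernetes/cni/net.d/. A minimal way to confirm both from a shell on the node (illustrative commands, assuming openssl is available; not taken from this log):

  # Webhook certificate validity window; notAfter should match 2025-08-24T17:21:41Z
  echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates
  # CNI config directory the kubelet is complaining about; empty until the network provider starts
  ls -l /etc/kubernetes/cni/net.d/

On CRC, stale bundle certificates like this are normally rotated automatically on the next cluster start, once the kubelet and the API server come up.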
event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.485301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.485322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.485339 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.502754 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:59Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.507156 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.507198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
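The 20:49:59.502754 entry above is a retry of the same patch rather than a new failure: the kubelet's tryUpdateNodeStatus loop retries a few times per sync period (five attempts in upstream kubelet) before reporting that it cannot update node status, so the identical payload and webhook error repeat back-to-back; the 20:49:59.524781 and 20:49:59.549450 entries below are further retries. To survey the pattern without the payload noise, something like the following works (illustrative; assumes the systemd unit is kubelet):

  # Count failed patch attempts this boot, then confirm they share a single root cause
  journalctl -b -u kubelet | grep -c 'Error updating node status'
  journalctl -b -u kubelet | grep -o 'certificate has expired or is not yet valid[^"]*' | sort -u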
event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.507233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.507255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.507273 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.524781 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:59Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.529625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.529681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.529693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.529709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.529723 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.549450 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:59Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.554253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.554320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.554337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.554363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.554380 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.575856 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:49:59Z is after 2025-08-24T17:21:41Z" Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.575959 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.577539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.577583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.577591 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.577604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.577612 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.679873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.679944 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.679968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.679992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.680009 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.782828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.782859 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.782870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.782887 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.782897 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.812557 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.812581 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.812661 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.812559 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.812799 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:49:59 crc kubenswrapper[4957]: E1128 20:49:59.812895 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.885501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.885557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.885567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.885580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.885588 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.987825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.987852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.987860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.987873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:49:59 crc kubenswrapper[4957]: I1128 20:49:59.987883 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:49:59Z","lastTransitionTime":"2025-11-28T20:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.091178 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.091299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.091325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.091348 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.091368 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.194731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.194816 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.194836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.194861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.194879 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.297873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.297935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.297959 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.297990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.298014 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.401266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.401336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.401359 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.401391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.401414 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.509896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.509992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.510015 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.510609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.510673 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.614424 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.614565 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.614589 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.614619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.614638 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.718008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.718084 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.718102 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.718134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.718160 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.812547 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:00 crc kubenswrapper[4957]: E1128 20:50:00.812799 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.820810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.820873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.820892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.820917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.820934 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.834323 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.857884 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.879340 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.893819 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.912104 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.923793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.923832 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.923845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.923864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.923876 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:00Z","lastTransitionTime":"2025-11-28T20:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.926754 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.941115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.956091 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.969196 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.983930 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:00 crc kubenswrapper[4957]: I1128 20:50:00.997107 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.008432 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:01Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.024795 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c0
6ef2e0304a9a837aa2c76c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:01Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.026447 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.026484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.026496 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.026512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.026525 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.035531 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:01Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.046052 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:01Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc 
kubenswrapper[4957]: I1128 20:50:01.057432 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:01Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.128290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.128328 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.128336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.128352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.128361 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.233811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.233855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.233865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.233881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.233890 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.336643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.336684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.336726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.336742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.336751 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.439062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.439124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.439141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.439169 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.439188 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.541522 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.541575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.541585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.541600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.541611 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.643892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.643951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.643961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.643975 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.643987 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.746293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.746384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.746399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.746421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.746438 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.812248 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:01 crc kubenswrapper[4957]: E1128 20:50:01.812354 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.812256 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.812464 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:01 crc kubenswrapper[4957]: E1128 20:50:01.812594 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:01 crc kubenswrapper[4957]: E1128 20:50:01.812702 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.848716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.848779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.848787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.848801 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.848809 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.951029 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.951098 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.951115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.951143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:01 crc kubenswrapper[4957]: I1128 20:50:01.951161 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:01Z","lastTransitionTime":"2025-11-28T20:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.053793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.053849 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.053868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.053891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.053908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.063538 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:02 crc kubenswrapper[4957]: E1128 20:50:02.063746 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:02 crc kubenswrapper[4957]: E1128 20:50:02.063855 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:50:10.063822796 +0000 UTC m=+49.532470765 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.156197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.156259 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.156275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.156293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.156306 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.259571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.259616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.259626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.259640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.259649 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.362032 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.362090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.362108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.362131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.362147 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.464576 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.464723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.464743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.464771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.464790 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.567994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.568054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.568074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.568097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.568114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.670991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.671056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.671074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.671100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.671119 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.773252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.773297 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.773311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.773332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.773348 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.813091 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:02 crc kubenswrapper[4957]: E1128 20:50:02.813384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.876595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.876641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.876652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.876668 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.876681 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.979572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.979636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.979655 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.979680 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:02 crc kubenswrapper[4957]: I1128 20:50:02.979698 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:02Z","lastTransitionTime":"2025-11-28T20:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.082430 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.082476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.082487 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.082504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.082516 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.184907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.184957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.184971 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.184991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.185003 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.287549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.287598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.287608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.287628 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.287640 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.389317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.389356 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.389367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.389383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.389392 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.491827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.491873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.491881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.491898 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.491908 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.594313 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.594349 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.594357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.594370 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.594380 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.697024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.697063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.697073 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.697088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.697099 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.798505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.798548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.798561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.798578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.798591 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.811972 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.811987 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:03 crc kubenswrapper[4957]: E1128 20:50:03.812063 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.812011 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:03 crc kubenswrapper[4957]: E1128 20:50:03.812407 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:03 crc kubenswrapper[4957]: E1128 20:50:03.812523 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.901644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.901690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.901701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.901716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:03 crc kubenswrapper[4957]: I1128 20:50:03.901729 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:03Z","lastTransitionTime":"2025-11-28T20:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.003768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.003797 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.003805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.003817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.003825 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.106251 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.106291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.106305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.106324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.106337 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.209263 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.209308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.209322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.209340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.209353 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.311353 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.311425 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.311441 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.311461 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.311475 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.414023 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.414110 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.414128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.414153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.414170 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.516001 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.516071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.516083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.516101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.516112 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.618813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.618863 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.618879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.618902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.618920 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.721566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.721856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.721956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.722076 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.722167 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.812187 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:04 crc kubenswrapper[4957]: E1128 20:50:04.812332 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.824417 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.824590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.824675 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.824743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.824813 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.927599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.927857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.927866 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.927879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:04 crc kubenswrapper[4957]: I1128 20:50:04.927887 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:04Z","lastTransitionTime":"2025-11-28T20:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.030344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.030383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.030391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.030406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.030417 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.132605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.132918 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.133015 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.133107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.133192 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.235102 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.235460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.235664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.235824 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.235962 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.338675 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.338720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.338730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.338744 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.338754 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.441021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.441065 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.441077 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.441093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.441104 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.543922 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.543994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.544012 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.544037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.544054 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.647387 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.647647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.647719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.647783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.647844 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.750697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.751290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.751507 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.751696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.751829 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.812324 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:05 crc kubenswrapper[4957]: E1128 20:50:05.812446 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.812551 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:05 crc kubenswrapper[4957]: E1128 20:50:05.812788 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.812986 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:05 crc kubenswrapper[4957]: E1128 20:50:05.813270 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.854894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.855048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.855068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.855092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.855108 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.957694 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.957757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.957774 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.957798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:05 crc kubenswrapper[4957]: I1128 20:50:05.957816 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:05Z","lastTransitionTime":"2025-11-28T20:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.060643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.060690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.060704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.060724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.060740 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.162374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.162435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.162454 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.162478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.162496 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.264669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.264723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.264745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.264776 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.264792 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.368351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.368421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.368443 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.368470 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.368493 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.471311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.471421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.471434 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.471452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.471802 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.574920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.574957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.574969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.574985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.574995 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.677755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.677852 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.677877 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.677911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.677934 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.781450 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.781526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.781546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.781580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.781604 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.812334 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:06 crc kubenswrapper[4957]: E1128 20:50:06.812526 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.884325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.884383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.884402 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.884428 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.884447 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.987103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.987152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.987165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.987183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:06 crc kubenswrapper[4957]: I1128 20:50:06.987195 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:06Z","lastTransitionTime":"2025-11-28T20:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.090386 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.090460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.090483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.090517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.090539 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.193203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.193362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.193382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.193405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.193421 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.295624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.295653 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.295662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.295674 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.295683 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.398619 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.398686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.398710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.398735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.398752 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.501293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.501357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.501382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.501411 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.501431 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.603608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.603642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.603651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.603665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.603676 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.706344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.706407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.706426 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.706449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.706467 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.810837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.810891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.810905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.810932 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.810957 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.812314 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.812344 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:07 crc kubenswrapper[4957]: E1128 20:50:07.812566 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
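
The cycle above repeats roughly every 100 ms: the kubelet re-records the node conditions and keeps the Ready condition False because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (OVN-Kubernetes writes it once ovnkube-controller comes up). A minimal sketch of that presence check, assuming only the conventional CNI conf-dir layout — the extension set (.conf, .conflist, .json) is illustrative, not the kubelet's actual loader:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The extension set mirrors conventional CNI loader
// behavior; this is a simplified illustration, not kubelet code.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	if !ok {
		// This is the state the log keeps reporting: NetworkReady=false,
		// so the node's Ready condition stays False.
		fmt.Println("no CNI configuration file found; node stays NotReady")
	}
}
```

Once ovnkube-controller (whose previous container is removed at 20:50:07 above and which the status patch shows running again with startedAt 2025-11-28T20:50:07Z) regenerates the config, the same check succeeds and the NotReady heartbeats stop.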
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:07 crc kubenswrapper[4957]: E1128 20:50:07.813311 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.813392 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:07 crc kubenswrapper[4957]: E1128 20:50:07.814017 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.814634 4957 scope.go:117] "RemoveContainer" containerID="8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.915195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.915727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.915927 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.916071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:07 crc kubenswrapper[4957]: I1128 20:50:07.916204 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:07Z","lastTransitionTime":"2025-11-28T20:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.019898 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.020007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.020089 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.020123 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.020144 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.122778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.122827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.122838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.122855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.122867 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.126044 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/1.log" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.128276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.128759 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.142516 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.171437 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.183873 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.193599 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.206695 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.220263 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.228793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.228844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.228856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.228873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.228885 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.239545 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.256435 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.274872 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.293435 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.314184 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.328407 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.330912 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.330953 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.330965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.330983 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.330997 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.343512 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.353067 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.364465 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.376726 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:08Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.433538 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.433569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.433578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.433590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.433599 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.535662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.535698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.535706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.535720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.535729 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.638253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.638316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.638333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.638358 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.638391 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.740648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.740686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.740697 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.740712 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.740721 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.812177 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:08 crc kubenswrapper[4957]: E1128 20:50:08.812525 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.842567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.842626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.842639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.842657 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.842672 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.945354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.945393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.945402 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.945417 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:08 crc kubenswrapper[4957]: I1128 20:50:08.945428 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:08Z","lastTransitionTime":"2025-11-28T20:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.048476 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.048515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.048526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.048546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.048558 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.133788 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/2.log" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.134686 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/1.log" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.137599 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" exitCode=1 Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.137649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.137693 4957 scope.go:117] "RemoveContainer" containerID="8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.138609 4957 scope.go:117] "RemoveContainer" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.138818 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.150930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.150965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.150974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.150989 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.151000 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.154298 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.168942 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.181617 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.195950 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.207032 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.223740 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.236275 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.253769 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.253809 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.253819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.253833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.253844 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.260099 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df352894
3f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8395bb15ab7c7270d57c7766e3265f0f96a877c06ef2e0304a9a837aa2c76c94\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:49:51Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917827 6397 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 20:49:50.917882 6397 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.274459 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.286620 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.302493 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.313393 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.326007 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.338303 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.348645 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.355775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.355822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.355867 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.355885 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.355897 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.362777 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.458355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.458401 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.458414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.458432 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.458441 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.560743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.560804 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.560814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.560833 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.560844 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.663267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.663336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.663351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.663369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.663381 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.765981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.766025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.766037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.766056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.766069 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.812899 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.813062 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.813306 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.813378 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.813410 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.813538 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.868777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.868819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.868834 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.868853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.868872 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.954761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.954827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.954848 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.954871 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.954888 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.969316 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.973155 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.973197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
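
Annotation: the error repeated through the entries above is Go's standard certificate validity-window check. The kube-apiserver cannot verify the serving certificate of the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the certificate's NotAfter date (2025-08-24T17:21:41Z) is months before the node's current clock (2025-11-28T20:50:09Z), so every pod status patch and node status patch is rejected at admission. A minimal Go sketch of that same check, assuming the webhook's serving certificate has first been exported to a local PEM file (the filename below is hypothetical; e.g. capture it with openssl s_client against 127.0.0.1:9743):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path: export the webhook's serving cert here first.
        data, err := os.ReadFile("webhook-serving-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // The same validity-window test that produces
        // "x509: certificate has expired or is not yet valid" in the log.
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

The "expired" branch prints the same "current time X is after Y" shape seen in every webhook failure above. Log capture continues:
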
event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.973234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.973255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.973275 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:09 crc kubenswrapper[4957]: E1128 20:50:09.984626 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.988247 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.988292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
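
Annotation: the interleaved NodeNotReady events have a second, independent cause. The kubelet's runtime network check reports NetworkReady=false because no CNI configuration exists under /etc/kubernetes/cni/net.d/, and the Ready condition stays False until the network provider writes one. A rough sketch of the kind of directory scan behind that message, assuming the directory named in the log and the file extensions libcni conventionally accepts (this is an illustration, not the kubelet's exact code path):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var confs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni conventionally recognizes
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
            return
        }
        fmt.Println("CNI configs present:", confs)
    }

An empty result reproduces the condition driving the KubeletNotReady spam; once multus/OVN writes a config, the Ready condition flips. Log capture continues:
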
event="NodeHasNoDiskPressure" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.988306 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.988329 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:09 crc kubenswrapper[4957]: I1128 20:50:09.988344 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:09Z","lastTransitionTime":"2025-11-28T20:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.000008 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:09Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.003690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.003760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.003769 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.003784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.003793 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.016427 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.023439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.024061 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
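All of the failed node-status patches in this stretch die at the same place: the kubelet's Post to https://127.0.0.1:9743/node is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, roughly 96 days before the clock time (2025-11-28T20:50:10Z) in these records. A minimal sketch for confirming that from the node, assuming Python plus the third-party cryptography package is available there (nothing in the log requires this tooling):

```python
# Sketch: inspect the serving certificate behind the failing webhook calls.
# Host and port (127.0.0.1:9743) come from the log lines above; the
# `cryptography` package (>= 42 for the *_utc accessors) is an assumption.
import ssl
from datetime import datetime, timezone
from cryptography import x509

# No CA bundle is passed, so the handshake does not verify the chain and an
# expired certificate can still be retrieved.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())
now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired for:", now - cert.not_valid_after_utc)  # ~96 days at 2025-11-28T20:50Z
```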
event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.024097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.024115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.024124 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.037646 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.037790 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.039692 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
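Each retry logs the entire attempted patch as an escaped JSON string inside err, which is why these records are so large; the kubelet gives up once the update exceeds its retry count, as the last line above shows. To inspect a payload rather than eyeball it, a sketch along these lines recovers the patch as a dict; extract_patch and its regex are illustrative, and the unescape loop is there because the number of backslash layers depends on how the journal text was captured:

```python
import json
import re

def extract_patch(err: str) -> dict:
    """Recover the JSON patch embedded in a kubelet
    'failed to patch status "..." for node ...' error string."""
    m = re.search(r'failed to patch status (.+) for node', err, re.DOTALL)
    if not m:
        raise ValueError("no status patch found in err string")
    payload = m.group(1).strip()
    for _ in range(5):  # tolerate up to 5 escaping layers
        if payload.startswith("\\"):
            payload = payload.encode().decode("unicode_escape")
        try:
            doc = json.loads(payload)
        except json.JSONDecodeError:
            payload = payload.encode().decode("unicode_escape")
            continue
        if isinstance(doc, str):  # one more encoding layer: parse again
            payload = doc
            continue
        return doc
    raise ValueError("could not decode patch payload")

# e.g. print([c["type"] for c in extract_patch(err)["status"]["conditions"]])
```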
event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.039745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.039756 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.039772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.039783 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.139540 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.139716 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.139842 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:50:26.139820346 +0000 UTC m=+65.608468305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.141595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.141631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.141642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.141658 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.141672 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.143185 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/2.log" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.146864 4957 scope.go:117] "RemoveContainer" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.147030 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.163171 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.163171 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.182525 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.201544 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.232192 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.244574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.244616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.244645 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.244663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.244675 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.250131 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.268120 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.288247 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.304935 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.322137 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.334838 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.347395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.347464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.347481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.347512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.347532 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.355567 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.371073 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.389844 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.402819 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.415310 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.425991 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.449794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.449843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.449858 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.449879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.449895 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.551707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.551759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.551775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.551795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.551813 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.654997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.655055 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.655066 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.655087 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.655099 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.757828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.757905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.757926 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.757957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.757977 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.812813 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:10 crc kubenswrapper[4957]: E1128 20:50:10.812994 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.835884 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.851646 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.859788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.859868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.859883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.859896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.859905 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.882904 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.895505 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.905645 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.925493 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.942458 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.956952 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.962027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.962093 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.962108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.962125 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.962139 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:10Z","lastTransitionTime":"2025-11-28T20:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.975023 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:10 crc kubenswrapper[4957]: I1128 20:50:10.985605 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:10.999917 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:10Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.011667 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.026290 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.047579 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.058649 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.063826 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.063859 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.063872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.063889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.063900 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.074525 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.166041 4957 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.166091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.166112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.166136 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.166153 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.268824 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.268873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.268886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.268904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.268916 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.376306 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.376395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.376414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.376439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.376456 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.479393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.479466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.479507 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.479539 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.479563 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.555313 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.555417 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555587 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555612 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555613 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555630 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555654 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555678 4957 projected.go:194] Error preparing data for projected volume 
kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555713 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:43.555691282 +0000 UTC m=+83.024339221 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.555754 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:43.555727943 +0000 UTC m=+83.024375892 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.583165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.583285 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.583300 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.583319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.583331 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.656170 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.656389 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:50:43.656347893 +0000 UTC m=+83.124995842 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.656667 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.656869 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.656976 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:43.656951768 +0000 UTC m=+83.125599737 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.685781 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.685851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.685875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.685903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.685921 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.757316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.757559 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.757654 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:50:43.757628849 +0000 UTC m=+83.226276798 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.787997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.788072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.788090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.788118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.788136 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.812488 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.812525 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.812591 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.812502 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.812705 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:11 crc kubenswrapper[4957]: E1128 20:50:11.812766 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.862159 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.876376 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.880024 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-cont
roller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.890894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.890943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.890958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.890979 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.891000 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.898156 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.928319 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.942538 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.952301 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.966987 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.979039 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.993500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.993540 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.993548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.993564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.993573 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:11Z","lastTransitionTime":"2025-11-28T20:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:11 crc kubenswrapper[4957]: I1128 20:50:11.996663 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:11Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.007757 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.018749 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.030751 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.042580 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.052683 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.067607 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.081365 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.091405 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:12Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.096290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.096314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.096322 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.096335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.096344 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.199557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.199595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.199605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.199618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.199629 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.305541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.305615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.305651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.305684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.305711 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.409104 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.409156 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.409166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.409181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.409199 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.511780 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.511838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.511855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.511880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.511897 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.614830 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.615073 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.615137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.615224 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.615285 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.717930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.717990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.718009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.718034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.718052 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.812610 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:12 crc kubenswrapper[4957]: E1128 20:50:12.812805 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.821444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.821495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.821512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.821536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.821552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.925489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.925548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.925565 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.925588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:12 crc kubenswrapper[4957]: I1128 20:50:12.925605 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:12Z","lastTransitionTime":"2025-11-28T20:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.028578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.028630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.028649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.028671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.028683 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.130947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.130989 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.131002 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.131019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.131032 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.233742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.233785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.233795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.233812 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.233823 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.336432 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.336484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.336495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.336514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.336528 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.439909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.439961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.439980 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.440007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.440027 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.543282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.543331 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.543351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.543375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.543393 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.645920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.645966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.645981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.645998 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.646010 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.748164 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.748240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.748252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.748269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.748285 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.812426 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.812460 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.812502 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:13 crc kubenswrapper[4957]: E1128 20:50:13.812668 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:13 crc kubenswrapper[4957]: E1128 20:50:13.812769 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:13 crc kubenswrapper[4957]: E1128 20:50:13.812854 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.850864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.850908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.850920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.850937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.850950 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.953445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.953559 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.953575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.953596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:13 crc kubenswrapper[4957]: I1128 20:50:13.953608 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:13Z","lastTransitionTime":"2025-11-28T20:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.056113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.056167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.056186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.056238 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.056260 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.157813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.157855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.157870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.157892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.157910 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.260289 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.260367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.260393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.260425 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.260452 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.362948 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.363032 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.363062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.363097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.363118 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.466355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.466399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.466413 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.466445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.466457 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.569040 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.569079 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.569090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.569106 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.569116 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.672552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.672616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.672637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.672665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.672684 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.776355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.776412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.776437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.776460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.776476 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.812362 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:14 crc kubenswrapper[4957]: E1128 20:50:14.812549 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.879417 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.879471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.879482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.879495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.879504 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.982608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.982639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.982647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.982661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:14 crc kubenswrapper[4957]: I1128 20:50:14.982670 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:14Z","lastTransitionTime":"2025-11-28T20:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.085698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.086087 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.086298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.086468 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.086644 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.188892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.189119 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.189180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.189269 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.189342 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.292132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.292162 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.292170 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.292231 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.292242 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.394987 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.395017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.395025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.395037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.395045 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.497555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.497936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.498078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.498262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.498405 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.601314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.601384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.601400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.601425 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.601442 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.703630 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.703676 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.703687 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.703706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.703720 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.806159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.806233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.806249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.806271 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.806287 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.812383 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.812412 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.812434 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:15 crc kubenswrapper[4957]: E1128 20:50:15.812537 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:15 crc kubenswrapper[4957]: E1128 20:50:15.812719 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:15 crc kubenswrapper[4957]: E1128 20:50:15.812871 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.909501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.909563 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.909575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.909592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:15 crc kubenswrapper[4957]: I1128 20:50:15.909604 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:15Z","lastTransitionTime":"2025-11-28T20:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.011143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.011192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.011249 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.011293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.011312 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.114100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.114152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.114171 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.114197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.114251 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.217308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.217375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.217396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.217423 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.217440 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.320178 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.320235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.320246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.320262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.320275 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.423556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.423631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.423670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.423702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.423766 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.526200 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.526295 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.526360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.526393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.526417 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.629377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.629490 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.629512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.629544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.629567 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.733177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.733275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.733294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.733319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.733338 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.812356 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 20:50:16 crc kubenswrapper[4957]: E1128 20:50:16.812628 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.837020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.837100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.837114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.837133 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.837148 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.940354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.940402 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.940417 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.940434 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:16 crc kubenswrapper[4957]: I1128 20:50:16.940446 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:16Z","lastTransitionTime":"2025-11-28T20:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.043642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.043704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.043725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.043789 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.043814 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.146080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.146133 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.146143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.146158 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.146166 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.249813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.249883 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.249901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.249920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.249934 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.353194 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.353256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.353266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.353282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.353291 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.455282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.455355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.455372 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.455386 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.455395 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.557452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.557505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.557517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.557531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.557559 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.659887 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.659997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.660072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.660105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.660126 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.761811 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.761839 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.761847 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.761860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.761868 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.812909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.813095 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:50:17 crc kubenswrapper[4957]: E1128 20:50:17.813166 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:50:17 crc kubenswrapper[4957]: E1128 20:50:17.813086 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.813281 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:50:17 crc kubenswrapper[4957]: E1128 20:50:17.813467 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.864344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.864374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.864383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.864395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.864405 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.967367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.967436 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.967453 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.967478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:17 crc kubenswrapper[4957]: I1128 20:50:17.967495 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:17Z","lastTransitionTime":"2025-11-28T20:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.070332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.070371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.070380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.070393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.070402 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.172039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.172090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.172100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.172112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.172121 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.274438 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.274478 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.274488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.274501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.274511 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.377437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.377525 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.377541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.377571 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.377590 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.479517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.479570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.479578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.479592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.479601 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.582794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.582828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.582836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.582849 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.582859 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.685965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.686031 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.686048 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.686074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.686092 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.802299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.802369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.802382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.802398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.802411 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.812147 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 20:50:18 crc kubenswrapper[4957]: E1128 20:50:18.812424 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.905805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.905870 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.905886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.905912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:18 crc kubenswrapper[4957]: I1128 20:50:18.905931 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:18Z","lastTransitionTime":"2025-11-28T20:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.007966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.008005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.008014 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.008046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.008062 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.111483 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.111554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.111569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.111587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.111599 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.213892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.213951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.213969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.213991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.214006 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.317837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.317894 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.317907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.317925 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.317936 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.420621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.420665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.420680 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.420700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.420716 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.523693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.523747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.523763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.523784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.523800 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.627044 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.629794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.629819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.629882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.629910 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.733029 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.733072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.733084 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.733101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.733114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.812585 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:50:19 crc kubenswrapper[4957]: E1128 20:50:19.812926 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.812783 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:50:19 crc kubenswrapper[4957]: E1128 20:50:19.813119 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.812642 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:50:19 crc kubenswrapper[4957]: E1128 20:50:19.813332 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.837523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.837874 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.837944 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.838028 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.838119 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.940861 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.941296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.941446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.941583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:19 crc kubenswrapper[4957]: I1128 20:50:19.941736 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:19Z","lastTransitionTime":"2025-11-28T20:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.044845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.044880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.044890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.044904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.044916 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.147993 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.148069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.148087 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.148117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.148135 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.250874 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.250941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.250960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.250986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.251004 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.353168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.353284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.353303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.353334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.353354 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.419388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.419439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.419451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.419469 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.419482 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.430664 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.434941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.435003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.435015 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.435031 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.435069 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.447065 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.450959 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.450992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.451001 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.451035 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.451048 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.470157 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.475496 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.475579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.475596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.475625 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.475643 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.490971 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.494562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.494787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.494798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.494817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.494830 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.513763 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.513874 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.515775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.515805 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.515813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.515827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.515837 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.618533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.618577 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.618587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.618602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.618612 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.722253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.722346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.722385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.722420 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.722447 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.811978 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:20 crc kubenswrapper[4957]: E1128 20:50:20.812111 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.826467 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.826514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.826526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.826549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.826565 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.834855 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.852771 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.873567 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.890866 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.905814 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.922056 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.929504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.929546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.929557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.929578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.929592 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:20Z","lastTransitionTime":"2025-11-28T20:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.936648 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.948120 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.959613 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.971576 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.981821 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:20 crc kubenswrapper[4957]: I1128 20:50:20.994799 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:20Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.005848 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:21Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.017228 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:21Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.027562 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:21Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.031303 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.031328 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.031336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.031350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.031359 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.042330 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:21Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.052007 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T20:50:21Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.134127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.134193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.134253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.134283 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.134303 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.237637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.237701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.237710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.237729 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.237739 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.340480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.340570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.340589 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.340616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.340634 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.443196 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.443264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.443278 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.443299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.443314 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.545257 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.545299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.545311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.545328 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.545340 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.647624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.647704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.647725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.647750 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.647768 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.751248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.751341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.751362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.751391 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.751411 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.812618 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.812750 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:21 crc kubenswrapper[4957]: E1128 20:50:21.812772 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.812945 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:21 crc kubenswrapper[4957]: E1128 20:50:21.813172 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:21 crc kubenswrapper[4957]: E1128 20:50:21.813385 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.854836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.854881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.854892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.854913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.854926 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.958335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.958960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.958987 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.959016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:21 crc kubenswrapper[4957]: I1128 20:50:21.959040 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:21Z","lastTransitionTime":"2025-11-28T20:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.061378 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.061444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.061463 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.061488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.061507 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.163285 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.163333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.163342 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.163357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.163367 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.266569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.266651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.266679 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.266730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.266754 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.369438 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.369494 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.369538 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.369592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.369610 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.472914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.472985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.473007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.473038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.473062 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.575502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.575544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.575555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.575570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.575580 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.677777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.677829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.677843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.677862 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.677875 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.780596 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.780638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.780646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.780659 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.780668 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.812026 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:22 crc kubenswrapper[4957]: E1128 20:50:22.812174 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.882573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.882615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.882627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.882643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.882654 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.985651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.985715 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.985734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.985760 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:22 crc kubenswrapper[4957]: I1128 20:50:22.985778 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:22Z","lastTransitionTime":"2025-11-28T20:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.089230 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.089291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.089304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.089326 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.089338 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.195090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.195149 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.195165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.195280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.195298 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.298309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.298373 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.298389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.298414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.298428 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.401840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.401901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.401915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.401935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.401957 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.505492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.505575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.505595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.505623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.505645 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.609103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.609172 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.609191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.609248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.609298 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.711838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.711901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.711923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.711941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.711955 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.811889 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.811979 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:23 crc kubenswrapper[4957]: E1128 20:50:23.812384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.812524 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:23 crc kubenswrapper[4957]: E1128 20:50:23.812934 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:23 crc kubenswrapper[4957]: E1128 20:50:23.813069 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.813172 4957 scope.go:117] "RemoveContainer" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" Nov 28 20:50:23 crc kubenswrapper[4957]: E1128 20:50:23.813424 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.814431 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.814458 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.814468 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.814482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.814491 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.919324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.919519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.919560 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.919597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:23 crc kubenswrapper[4957]: I1128 20:50:23.919629 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:23Z","lastTransitionTime":"2025-11-28T20:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.023548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.023624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.023638 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.023656 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.023668 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.125856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.125903 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.125914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.125930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.125942 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.228749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.228796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.228808 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.228825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.228839 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.332166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.332284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.332303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.332340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.332364 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.435203 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.435257 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.435266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.435280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.435290 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.537468 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.537506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.537515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.537530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.537540 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.639843 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.639878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.639886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.639900 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.639911 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.742109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.742153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.742165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.742179 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.742190 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.812813 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:24 crc kubenswrapper[4957]: E1128 20:50:24.812924 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.844549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.844590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.844598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.844612 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.844623 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.946721 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.946758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.946768 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.946784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:24 crc kubenswrapper[4957]: I1128 20:50:24.946796 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:24Z","lastTransitionTime":"2025-11-28T20:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.049181 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.049236 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.049248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.049264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.049275 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.151187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.151233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.151241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.151254 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.151263 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.253276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.253324 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.253336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.253354 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.253365 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.386186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.386239 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.386252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.386268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.386278 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.488341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.488379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.488390 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.488406 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.488417 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.590667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.590950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.590964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.590979 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.590991 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.693069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.693122 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.693131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.693148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.693160 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.795336 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.795373 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.795384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.795398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.795409 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.812885 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.812906 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.812963 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:25 crc kubenswrapper[4957]: E1128 20:50:25.812980 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:25 crc kubenswrapper[4957]: E1128 20:50:25.813055 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:25 crc kubenswrapper[4957]: E1128 20:50:25.813148 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.898154 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.898195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.898226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.898242 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:25 crc kubenswrapper[4957]: I1128 20:50:25.898255 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:25Z","lastTransitionTime":"2025-11-28T20:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.000990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.001032 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.001049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.001071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.001088 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.103288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.103636 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.103736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.103836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.103937 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.206309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.206337 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.206346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.206375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.206384 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.221734 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:26 crc kubenswrapper[4957]: E1128 20:50:26.221863 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:26 crc kubenswrapper[4957]: E1128 20:50:26.221907 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:50:58.221894825 +0000 UTC m=+97.690542734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.308786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.308825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.308849 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.308869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.308878 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.411500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.411747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.411773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.411809 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.411831 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.514617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.514674 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.514685 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.514700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.514710 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.617733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.617792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.617806 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.617831 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.617851 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.721765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.721807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.721818 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.721836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.721849 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.812958 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:26 crc kubenswrapper[4957]: E1128 20:50:26.813316 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.824557 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.824585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.824594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.824609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.824619 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.927835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.927880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.927892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.927911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:26 crc kubenswrapper[4957]: I1128 20:50:26.927923 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:26Z","lastTransitionTime":"2025-11-28T20:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.030362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.030446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.030475 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.030503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.030519 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.134538 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.134590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.134599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.134615 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.134627 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.208762 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/0.log" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.208847 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" containerID="1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4" exitCode=1 Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.208898 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerDied","Data":"1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.209656 4957 scope.go:117] "RemoveContainer" containerID="1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.226380 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.236966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.236999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.237009 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.237026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.237036 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.241962 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.262404 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.275124 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.288918 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.309731 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.326501 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.339504 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.339575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.339592 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.339618 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.339635 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.345621 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.359471 4957 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.375728 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.395017 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.410544 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.421674 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.435040 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.441548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.441579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.441587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.441602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.441612 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.446490 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.462717 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.482254 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:27Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.544107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.544148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.544160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.544178 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.544190 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.646569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.646616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.646627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.646641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.646650 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.749105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.749134 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.749143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.749155 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.749163 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.813024 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:27 crc kubenswrapper[4957]: E1128 20:50:27.813164 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.813017 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.813021 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:27 crc kubenswrapper[4957]: E1128 20:50:27.813391 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:27 crc kubenswrapper[4957]: E1128 20:50:27.813251 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.854255 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.854300 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.854318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.854335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.854346 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.956453 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.956499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.956510 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.956523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:27 crc kubenswrapper[4957]: I1128 20:50:27.956532 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:27Z","lastTransitionTime":"2025-11-28T20:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.058675 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.058735 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.058748 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.058765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.058777 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.160984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.161021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.161030 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.161046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.161056 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.213725 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/0.log" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.213780 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerStarted","Data":"7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.228674 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.242412 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.257795 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.262787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.262817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.262827 4957 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.262840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.262850 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.276275 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.290343 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.301500 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.311682 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.321547 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.334319 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.345373 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.356611 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.366012 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.366046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.366056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.366071 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.366081 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.373924 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.384974 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.395202 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.406289 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.416624 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.437730 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:28Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.470201 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.470290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.470308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.470333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.470351 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.572696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.572773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.572785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.572802 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.572813 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.674947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.674966 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.674974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.674985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.674992 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.777113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.777150 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.777161 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.777175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.777183 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.812730 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:28 crc kubenswrapper[4957]: E1128 20:50:28.812823 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.879229 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.879263 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.879273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.879290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.879302 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.981450 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.981486 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.981496 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.981514 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:28 crc kubenswrapper[4957]: I1128 20:50:28.981524 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:28Z","lastTransitionTime":"2025-11-28T20:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.083156 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.083187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.083197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.083226 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.083237 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.185995 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.186069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.186092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.186125 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.186147 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.288273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.288303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.288312 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.288340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.288358 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.390994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.391063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.391086 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.391115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.391133 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.493564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.493608 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.493617 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.493632 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.493641 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.595765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.595799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.595807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.595820 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.595828 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.698320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.698367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.698379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.698397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.698408 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.801528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.801578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.801599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.801626 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.801645 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.812486 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.812551 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:29 crc kubenswrapper[4957]: E1128 20:50:29.812643 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.812678 4957 util.go:30] "No sandbox for pod can be found. 
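Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"

Every sandbox start above is refused for the same reason the Ready condition keeps flapping: kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, which ovn-kubernetes cannot write until its own pods come up. A small sketch of the equivalent presence check; the extension list is an assumption based on what CNI config loaders conventionally accept, not something stated in this log:

from pathlib import Path

# Directory named in the kubelet message above; the glob patterns are assumed.
CNI_DIR = Path('/etc/kubernetes/cni/net.d')
PATTERNS = ('*.conf', '*.conflist', '*.json')

def cni_configs() -> list[Path]:
    if not CNI_DIR.is_dir():
        return []
    return sorted(p for pat in PATTERNS for p in CNI_DIR.glob(pat))

configs = cni_configs()
if configs:
    print('CNI config present:', ', '.join(map(str, configs)))
else:
    print(f'no CNI configuration file in {CNI_DIR}/ -- network plugin has not written one yet')

Run on the node, this would keep printing the "no CNI configuration file" branch until the ovnkube-node pod (whose status update fails above) manages to start and drop its config.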
Nov 28 20:50:29 crc kubenswrapper[4957]: E1128 20:50:29.812751 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:29 crc kubenswrapper[4957]: E1128 20:50:29.812968 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.904153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.904233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.904250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.904273 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:29 crc kubenswrapper[4957]: I1128 20:50:29.904292 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:29Z","lastTransitionTime":"2025-11-28T20:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.007319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.007374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.007390 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.007409 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.007421 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.109863 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.109924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.109943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.109968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.109987 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.211890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.211946 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.211962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.211990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.212010 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.314725 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.314775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.314783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.314799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.314811 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
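Has your network provider started?"}

The setters.go entries embed the full Ready condition as a JSON object after condition=, so the NotReady flapping can be tabulated straight from the journal text. A sketch under the assumption that entries arrive one per line on stdin, as journalctl emits them; the field names are exactly the ones visible in the entries above:

import json
import re
import sys

# setters.go logs the condition inline, e.g.
#   "Node became not ready" node="crc" condition={"type":"Ready",...}
COND = re.compile(r'condition=(\{.*?\})')

for raw in sys.stdin:
    for m in COND.finditer(raw):
        c = json.loads(m.group(1))
        print(c['lastHeartbeatTime'], c['type'], c['status'], c['reason'])

For the stretch above this would print a Ready/False/KubeletNotReady row roughly every 100 ms, which is the kubelet's node-status loop retrying while the CNI check keeps failing.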
Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.417367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.417407 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.417417 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.417432 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.417441 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.519013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.519055 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.519070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.519091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.519107 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.576373 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.576426 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.576442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.576462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.576475 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.589976 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.593913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.593940 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.593950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.593964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.593975 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.605794 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.609043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.609075 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.609085 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.609100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.609111 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.624175 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.627094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.627131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.627141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.627157 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.627168 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.639392 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.642796 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.642817 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.642826 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.642837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.642863 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.654396 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.654535 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.655886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.655913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.655941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.655955 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.655964 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.758482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.758520 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.758528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.758541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.758550 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.812231 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:30 crc kubenswrapper[4957]: E1128 20:50:30.812502 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.825484 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.826498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",
\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.834368 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.844575 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.860635 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.860942 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.860953 4957 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.860969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.860979 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.862072 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.874956 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.896654 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df352894
3f8bc945b19c982963ab5db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.906138 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.916188 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.927460 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.938338 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.949793 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.960474 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.962895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.962984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.963056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.963118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.963191 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:30Z","lastTransitionTime":"2025-11-28T20:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.969387 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.980607 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:30 crc kubenswrapper[4957]: I1128 20:50:30.991359 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:30Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.002707 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:31Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.013668 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:31Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.065433 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.065595 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.065677 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.065761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.065818 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:31Z","lastTransitionTime":"2025-11-28T20:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.812450 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:50:31 crc kubenswrapper[4957]: E1128 20:50:31.812676 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.812905 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:50:31 crc kubenswrapper[4957]: I1128 20:50:31.812952 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:50:31 crc kubenswrapper[4957]: E1128 20:50:31.813183 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:50:31 crc kubenswrapper[4957]: E1128 20:50:31.813301 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.091500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.091551 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.091562 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.091574 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.091583 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.193404 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.193437 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.193445 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.193457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.193466 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.295668 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.295702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.295711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.295745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.295754 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.397731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.397772 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.397784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.397801 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.397810 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.499925 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.499973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.499985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.500003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.500014 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.602492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.602523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.602533 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.602548 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.602557 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.704649 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.704686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.704696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.704710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.704719 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.807604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.808304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.808416 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.808568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.808698 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.812993 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:32 crc kubenswrapper[4957]: E1128 20:50:32.813276 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.911429 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.911470 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.911482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.911500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:32 crc kubenswrapper[4957]: I1128 20:50:32.911512 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:32Z","lastTransitionTime":"2025-11-28T20:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.013717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.013753 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.013761 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.013775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.013784 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.116147 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.116190 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.116200 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.116232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.116245 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.218531 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.218569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.218582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.218598 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.218610 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.321414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.321463 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.321481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.321505 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.321522 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.424851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.424914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.424935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.424968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.424989 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.527413 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.527457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.527465 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.527481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.527489 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.629611 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.629656 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.629670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.629688 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.629701 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.732228 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.732282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.732296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.732314 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.732329 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.812126 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.812195 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.812191 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:33 crc kubenswrapper[4957]: E1128 20:50:33.812372 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:33 crc kubenswrapper[4957]: E1128 20:50:33.812619 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:33 crc kubenswrapper[4957]: E1128 20:50:33.812723 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.835083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.835119 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.835127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.835143 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.835153 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.937882 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.937937 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.937948 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.937965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:33 crc kubenswrapper[4957]: I1128 20:50:33.937976 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:33Z","lastTransitionTime":"2025-11-28T20:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.040006 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.040043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.040067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.040082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.040092 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.141863 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.141893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.141900 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.141914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.141922 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.243823 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.243865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.243880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.243901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.243914 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.345865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.345908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.345917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.345932 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.345940 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.448377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.448442 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.448462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.448490 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.448511 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.551970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.552019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.552034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.552049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.552058 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.657325 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.657379 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.657388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.657403 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.657412 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.760724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.760770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.760779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.760795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.760811 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.812926 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:34 crc kubenswrapper[4957]: E1128 20:50:34.813058 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.862751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.862810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.862834 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.862856 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.862870 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.965785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.965838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.965853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.965873 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:34 crc kubenswrapper[4957]: I1128 20:50:34.965888 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:34Z","lastTransitionTime":"2025-11-28T20:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.068956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.069049 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.069066 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.069094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.069116 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.171138 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.171204 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.171252 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.171275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.171296 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.273489 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.273524 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.273532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.273544 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.273552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.376422 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.376519 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.376537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.376561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.376578 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.479570 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.479668 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.479702 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.479734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.479757 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.582353 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.582399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.582409 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.582427 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.582447 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.686165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.686270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.686291 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.686315 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.686333 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.789785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.789869 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.789890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.789921 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.789943 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.812759 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.812802 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.812818 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:35 crc kubenswrapper[4957]: E1128 20:50:35.812956 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:35 crc kubenswrapper[4957]: E1128 20:50:35.813102 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:35 crc kubenswrapper[4957]: E1128 20:50:35.813312 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.893599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.893684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.893698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.893715 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.893727 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.995896 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.995947 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.995960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.995977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:35 crc kubenswrapper[4957]: I1128 20:50:35.995993 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:35Z","lastTransitionTime":"2025-11-28T20:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.099137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.099182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.099193 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.099229 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.099241 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.202120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.202176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.202192 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.202235 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.202252 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.305613 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.305701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.305718 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.305743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.305761 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.408651 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.408694 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.408704 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.408720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.408731 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.511381 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.511449 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.511466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.511488 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.511564 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.614662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.614698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.614707 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.614723 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.614732 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.717434 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.717763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.717909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.718053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.718246 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.812095 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:36 crc kubenswrapper[4957]: E1128 20:50:36.812383 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.820195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.820275 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.820286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.820301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.820309 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.921865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.922133 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.922191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.922298 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:36 crc kubenswrapper[4957]: I1128 20:50:36.922380 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:36Z","lastTransitionTime":"2025-11-28T20:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.025026 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.025305 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.025386 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.025479 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.025552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.127108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.127145 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.127153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.127168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.127177 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.232994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.233059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.233072 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.233094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.233114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.335059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.335100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.335112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.335128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.335139 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.437622 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.437680 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.437698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.437720 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.437738 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.540499 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.540584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.540607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.540639 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.540660 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.643778 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.643827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.643838 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.643857 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.643872 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.746274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.746318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.746330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.746347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.746361 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.812181 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.812319 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.812333 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:37 crc kubenswrapper[4957]: E1128 20:50:37.812439 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:37 crc kubenswrapper[4957]: E1128 20:50:37.812539 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:37 crc kubenswrapper[4957]: E1128 20:50:37.812729 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.849357 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.849419 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.849506 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.849545 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.849569 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.952033 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.952120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.952144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.952177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:37 crc kubenswrapper[4957]: I1128 20:50:37.952198 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:37Z","lastTransitionTime":"2025-11-28T20:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.054190 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.054282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.054299 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.054320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.054335 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.157034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.157080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.157095 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.157112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.157122 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.259253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.259288 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.259296 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.259309 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.259317 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.362015 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.362057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.362069 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.362085 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.362096 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.464742 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.464771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.464783 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.464799 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.464812 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.567482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.567534 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.567549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.567568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.567582 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.670267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.670345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.670364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.670399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.670422 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.772911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.772981 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.773000 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.773025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.773042 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.812483 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:38 crc kubenswrapper[4957]: E1128 20:50:38.812665 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.814046 4957 scope.go:117] "RemoveContainer" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.877409 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.877484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.877513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.877546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.877573 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.979927 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.979977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.979994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.980027 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:38 crc kubenswrapper[4957]: I1128 20:50:38.980053 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:38Z","lastTransitionTime":"2025-11-28T20:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.083319 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.083371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.083383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.083401 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.083414 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.185955 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.186021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.186038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.186064 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.186080 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.287759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.287798 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.287807 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.287822 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.287835 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.390061 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.390096 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.390107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.390153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.390164 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.492860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.492911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.492931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.492954 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.492972 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.597785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.597978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.598003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.598057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.598078 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.700931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.701005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.701019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.701039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.701055 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.804810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.804875 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.804893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.804924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.804949 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.812154 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.812170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.812289 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:39 crc kubenswrapper[4957]: E1128 20:50:39.812484 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:39 crc kubenswrapper[4957]: E1128 20:50:39.812639 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:39 crc kubenswrapper[4957]: E1128 20:50:39.812772 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.907911 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.907945 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.907956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.907974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:39 crc kubenswrapper[4957]: I1128 20:50:39.907985 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:39Z","lastTransitionTime":"2025-11-28T20:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.011387 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.011420 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.011446 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.011460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.011469 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.114469 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.114507 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.114516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.114532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.114542 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.217403 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.217439 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.217450 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.217466 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.217480 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.250913 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/2.log" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.255226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.259000 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.274683 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.286720 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.298902 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.317106 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.319681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.319715 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.319726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.319743 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.319754 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.329020 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.339528 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f64cc30-59c8-4883-a6ef-00ee662935ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.351973 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.363966 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.380618 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.393663 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.402173 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.414692 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.421334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.421376 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.421385 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.421398 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.421408 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.425185 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.435677 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.447147 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.460862 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.472036 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.484273 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
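
[editor's note] The multus "Readiness Indicator file check" in the entry above is a poll-until-timeout loop on the CNI config file that ovn-kubernetes writes; when the file never appears, the daemon exits with the `timed out waiting for the condition` error that follows. A stdlib-only sketch of that pattern; the path comes from the log, while the intervals are assumptions, and multus itself uses a PollImmediate-style helper rather than this exact loop:

```go
// readywait.go - sketch of polling for a readiness-indicator file until a
// deadline, mirroring the loop whose timeout is logged above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile re-checks path every interval until timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // file exists: the default network is ready
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	if err != nil {
		fmt.Println("have you checked that your default network is ready?", err)
		os.Exit(1)
	}
}
```
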
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.523108 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.523141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.523153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.523168 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.523179 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.625145 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.625176 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.625185 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.625197 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.625220 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.660813 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.661105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.661234 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.661421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.661575 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
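
[editor's note] The `NetworkReady=false ... no CNI configuration file in /etc/kubernetes/cni/net.d/` condition repeated throughout this window comes from the container runtime scanning the CNI config directory and finding no usable network config. A simplified stand-in for that scan; the extension list follows CNI convention, and this is an approximation rather than the runtime's actual code:

```go
// cniscan.go - simplified check for CNI network configs in a conf dir,
// approximating why the runtime reports NetworkReady=false above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("CNI configuration present")
}
```
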
Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.673728 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.677074 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.677109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
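
[editor's note] The `$setElementOrder/conditions` key in these failing patches is a strategic-merge-patch directive: it records the desired order of the `conditions` list (keyed by `type`) alongside the changed elements. A sketch that builds the same shape of node-status patch with the standard library; field values are taken from the log entries above, and this shows the payload shape only, not the kubelet's own code:

```go
// patchshape.go - builds a node-status patch of the same shape as the ones
// the kubelet fails to send above ($setElementOrder is a strategic-merge
// directive giving the order of the keyed "conditions" list).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	order := []map[string]string{
		{"type": "MemoryPressure"}, {"type": "DiskPressure"},
		{"type": "PIDPressure"}, {"type": "Ready"},
	}
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": order,
			"conditions": []map[string]any{{
				"type":    "Ready",
				"status":  "False",
				"reason":  "KubeletNotReady",
				"message": "container runtime network not ready",
			}},
		},
	}
	b, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	// Body for a strategic-merge PATCH of /api/v1/nodes/crc/status.
	fmt.Println(string(b))
}
```
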
event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.677121 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.677137 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.677149 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.687958 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.691862 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.691895 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.691909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.691924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.691933 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.704675 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.707779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.707810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.707819 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.707832 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.707841 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.719166 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.722306 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.722335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.722344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.722358 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.722368 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.738498 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.738662 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.740163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.740195 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.740217 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.740233 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.740243 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.812112 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:40 crc kubenswrapper[4957]: E1128 20:50:40.812255 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.828297 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.839102 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.843716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.843746 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.843757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.843773 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.843783 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.852269 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.867031 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z"
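Every status patch above is failing for the same reason: the kubelet's POST to the network-node-identity webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28T20:50:40Z. A quick way to confirm the expiry independently of the kubelet is to pull the certificate off the listener and print its validity window. Below is a minimal Python sketch, assuming it runs on the node itself (so 127.0.0.1:9743 is reachable) and that the third-party cryptography package is installed; certificate verification is deliberately disabled, since a verifying handshake would abort with the same x509 error seen in these records:

    import socket
    import ssl
    from cryptography import x509  # third-party package; assumed available

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the errors above

    # Disable verification: the goal is to read the expired certificate, and a
    # verifying handshake would fail exactly like the kubelet's POST does.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # peer certificate, raw DER

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 here

If the printed notAfter matches the expiry in the errors, the fault is the webhook's certificate (or a node clock running far ahead of it), not anything in the individual pods whose status updates are being rejected.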
Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.877300 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.886354 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f64cc30-59c8-4883-a6ef-00ee662935ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.896498 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.907191 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z"
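The patch bodies in these records are ordinary JSON status patches, but they arrive double-escaped: once when the patch is quoted into the err field and once more when the record is rendered, so every quote surfaces as \\\". To inspect one, copy the text between failed to patch status \" and \" for pod and unescape it twice. A minimal Python sketch, where payload holds a shortened, hypothetical fragment of the iptables-alerter-4ln5h patch above (only the uid is taken from that record):

    import codecs
    import json

    # Shortened, hypothetical excerpt of one escaped patch body from the log.
    payload = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"phase\\\":\\\"Running\\\"}}'

    once = codecs.decode(payload, "unicode_escape")   # \\\" becomes \"
    twice = codecs.decode(once, "unicode_escape")     # \" becomes "
    patch = json.loads(twice)                         # now plain JSON
    print(json.dumps(patch, indent=2))

The decoded object is only the status the kubelet was trying to record; nothing in it is malformed. The failure sits entirely in the webhook call that admits the patch, which is why every pod on this node reports the identical error.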
Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.923629 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.935945 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.945572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.945601 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.945609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.945621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.945631 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:40Z","lastTransitionTime":"2025-11-28T20:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.946573 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.960649 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.976013 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:40 crc kubenswrapper[4957]: I1128 20:50:40.988455 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.000332 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:40Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.010103 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.017716 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.027638 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:41Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.048182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.048232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.048241 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.048256 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.048267 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.151082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.151116 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.151124 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.151139 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.151147 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.253600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.253678 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.253700 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.253727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.253744 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.356923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.356967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.356977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.356993 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.357005 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.460248 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.460396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.460426 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.460456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.460478 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.563365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.563418 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.563435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.563457 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.563474 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.666621 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.666717 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.666736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.666762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.666779 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.769100 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.769471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.769586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.769695 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.769777 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.812882 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.812890 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:41 crc kubenswrapper[4957]: E1128 20:50:41.813084 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:41 crc kubenswrapper[4957]: E1128 20:50:41.813142 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.812909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:41 crc kubenswrapper[4957]: E1128 20:50:41.813435 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.871770 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.871816 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.871827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.871845 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.871859 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.974516 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.974581 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.974591 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.974607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:41 crc kubenswrapper[4957]: I1128 20:50:41.974616 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:41Z","lastTransitionTime":"2025-11-28T20:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.077532 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.077837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.077975 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.078118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.078283 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.180977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.181014 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.181092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.181110 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.181122 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.262963 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/3.log" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.263655 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/2.log" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.267777 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" exitCode=1 Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.267842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.267902 4957 scope.go:117] "RemoveContainer" containerID="cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.270283 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 20:50:42 crc kubenswrapper[4957]: E1128 20:50:42.270880 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.285537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.285590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.285613 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.285644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.285666 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.297198 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.316909 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.332433 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.348048 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.361856 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.373590 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.387735 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.387939 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.387968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.387979 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.387997 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.388009 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.409521 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 
20:50:42.423260 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.435535 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.458751 4957 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.470197 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.482203 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f64cc30-59c8-4883-a6ef-00ee662935ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.490088 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.490259 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.490346 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.490413 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.490472 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.498637 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.514765 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.550570 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082
f4db9c0f7afc17154a354af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:41Z\\\",\\\"message\\\":\\\"fa-8608-def310eb5a2a 5471 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc007696ecd 0xc007696ece}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 20:50:40.972298 6980 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.567643 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.582853 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:42Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.592157 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.592292 4957 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.592362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.592423 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.592524 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.694663 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.694706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.694722 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.694750 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.694770 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.797422 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.797624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.797724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.797803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.797925 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.812644 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:42 crc kubenswrapper[4957]: E1128 20:50:42.812766 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.899888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.899925 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.899936 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.899951 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:42 crc kubenswrapper[4957]: I1128 20:50:42.899961 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:42Z","lastTransitionTime":"2025-11-28T20:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.002686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.003188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.003293 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.003388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.003445 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.106159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.106401 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.106568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.106749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.106956 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.209543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.209913 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.210046 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.210166 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.210321 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.274743 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/3.log" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.313600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.313669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.313694 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.313726 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.313748 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.415888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.416117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.416388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.416602 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.416796 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.519917 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.519960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.519968 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.519985 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.519996 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.604299 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.604392 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.604571 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.604600 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.604618 4957 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.604697 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.604675037 +0000 UTC m=+147.073322976 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.605012 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.605053 4957 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.605069 4957 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.605140 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.605120718 +0000 UTC m=+147.073768637 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.623092 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.623146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.623165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.623190 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.623235 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.705650 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.705773 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.705747528 +0000 UTC m=+147.174395447 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.706278 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.706421 4957 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.706493 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.706478147 +0000 UTC m=+147.175126066 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.725555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.725686 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.725711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.725741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.725763 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.806799 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.806905 4957 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.806977 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.806956203 +0000 UTC m=+147.275604132 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.812802 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.813035 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.812851 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.813313 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.812825 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:43 crc kubenswrapper[4957]: E1128 20:50:43.813610 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.828304 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.828500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.828586 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.828666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.828744 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.931280 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.931561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.931671 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.931787 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:43 crc kubenswrapper[4957]: I1128 20:50:43.931907 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:43Z","lastTransitionTime":"2025-11-28T20:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.035034 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.035091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.035111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.035141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.035169 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.137528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.137779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.137880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.137991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.138089 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.241567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.241643 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.241666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.241694 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.241715 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.344270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.344311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.344320 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.344332 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.344341 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.446597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.446634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.446641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.446654 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.446664 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.548898 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.548943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.548958 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.548979 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.548995 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.650902 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.650934 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.650945 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.650961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.650971 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.753493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.753535 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.753547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.753561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.753572 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.812735 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:44 crc kubenswrapper[4957]: E1128 20:50:44.812869 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.855274 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.855316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.855327 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.855341 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.855351 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.956888 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.956938 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.956949 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.956967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:44 crc kubenswrapper[4957]: I1128 20:50:44.956980 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:44Z","lastTransitionTime":"2025-11-28T20:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.060311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.060373 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.060395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.060424 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.060444 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.162696 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.162747 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.162765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.162788 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.162804 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.265008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.265057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.265068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.265087 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.265100 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.367567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.367623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.367633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.367646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.367655 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.469688 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.469730 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.469740 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.469754 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.469765 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.573113 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.573167 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.573184 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.573232 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.573251 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.675523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.675582 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.675599 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.675623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.675641 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.779031 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.779132 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.779152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.779180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.779202 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.812679 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.812720 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.812694 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:45 crc kubenswrapper[4957]: E1128 20:50:45.812843 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:45 crc kubenswrapper[4957]: E1128 20:50:45.813026 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:45 crc kubenswrapper[4957]: E1128 20:50:45.813150 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.882579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.882690 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.882709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.882734 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.882751 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.986495 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.986573 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.986597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.986624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:45 crc kubenswrapper[4957]: I1128 20:50:45.986642 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:45Z","lastTransitionTime":"2025-11-28T20:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.089172 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.089240 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.089250 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.089267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.089324 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.192292 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.192352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.192372 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.192396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.192414 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.295733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.295795 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.295812 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.295835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.295853 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.398908 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.398941 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.398952 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.398967 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.398977 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.501451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.501547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.501567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.501627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.501646 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.604345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.604435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.604463 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.604493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.604516 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.706962 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.707008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.707019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.707037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.707051 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.809344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.809414 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.809428 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.809452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.809467 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.812906 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:46 crc kubenswrapper[4957]: E1128 20:50:46.813069 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.911667 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.911737 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.911758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.911786 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:46 crc kubenswrapper[4957]: I1128 20:50:46.911803 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:46Z","lastTransitionTime":"2025-11-28T20:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.013511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.013556 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.013567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.013585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.013597 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.115709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.115752 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.115765 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.115782 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.115793 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.217706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.217751 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.217763 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.217779 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.217793 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.321126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.321188 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.321244 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.321276 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.321297 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.424435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.424481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.424492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.424511 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.424522 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.526835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.526889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.526907 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.526930 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.526947 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.629835 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.629897 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.629914 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.629938 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.629955 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.732915 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.732977 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.732996 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.733019 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.733036 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.812596 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.812641 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.812770 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:47 crc kubenswrapper[4957]: E1128 20:50:47.812775 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:47 crc kubenswrapper[4957]: E1128 20:50:47.812959 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:47 crc kubenswrapper[4957]: E1128 20:50:47.813118 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.835430 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.835493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.835515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.835543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.835568 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.937762 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.937810 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.937829 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.937853 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:47 crc kubenswrapper[4957]: I1128 20:50:47.937870 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:47Z","lastTransitionTime":"2025-11-28T20:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.039920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.039969 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.039984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.040004 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.040021 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.142128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.142171 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.142182 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.142199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.142230 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.245099 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.245135 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.245146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.245162 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.245173 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.347918 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.347959 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.347970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.347986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.347997 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.450369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.450412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.450421 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.450435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.450444 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.553050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.553090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.553098 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.553115 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.553124 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.655493 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.655566 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.655576 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.655590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.655601 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.757919 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.757980 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.757998 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.758024 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.758043 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.812513 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:48 crc kubenswrapper[4957]: E1128 20:50:48.812693 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.860775 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.860988 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.861103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.861227 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.861311 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.963587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.963665 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.963683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.963706 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:48 crc kubenswrapper[4957]: I1128 20:50:48.963723 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:48Z","lastTransitionTime":"2025-11-28T20:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.066118 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.066180 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.066198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.066262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.066282 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.168301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.168340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.168352 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.168369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.168380 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.270033 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.270075 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.270085 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.270102 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.270114 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.372020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.372303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.372418 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.372513 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.372581 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.474986 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.475029 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.475038 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.475054 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.475063 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.576793 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.576827 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.576841 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.576855 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.576865 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.679340 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.679371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.679381 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.679396 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.679406 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.781964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.781994 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.782005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.782018 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.782029 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.812029 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.812081 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:49 crc kubenswrapper[4957]: E1128 20:50:49.812150 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.812157 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:50:49 crc kubenswrapper[4957]: E1128 20:50:49.812278 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:50:49 crc kubenswrapper[4957]: E1128 20:50:49.812359 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.884239 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.884311 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.884335 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.884362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.884382 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.987052 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.987098 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.987109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.987127 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:49 crc kubenswrapper[4957]: I1128 20:50:49.987139 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:49Z","lastTransitionTime":"2025-11-28T20:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.089634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.089892 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.090001 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.090091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.090190 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.191920 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.192662 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.192782 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.192880 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.192966 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.296607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.298345 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.298388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.298408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.298424 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.400757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.401175 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.401382 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.401564 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.401748 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.504552 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.504641 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.504666 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.504711 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.504731 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.608294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.608575 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.608642 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.608713 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.608776 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.711588 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.711837 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.711905 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.711971 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.712042 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.813063 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.813487 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.814828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.814868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.814884 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.814904 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.814920 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.816501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.816604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.816741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.816816 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.816873 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.829608 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f64cc30-59c8-4883-a6ef-00ee662935ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.837382 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.842279 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.842308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.842318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.842333 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.842344 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.852925 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.860802 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"9
54acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.865003 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.865148 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.865268 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.865363 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.865453 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.870115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.883760 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.891814 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.891859 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.891871 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.891890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.891902 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.893036 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082
f4db9c0f7afc17154a354af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cca9d4447ca304c078d24088f4ff0ba1df3528943f8bc945b19c982963ab5db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:09Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1128 20:50:08.750288 6622 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:41Z\\\",\\\"message\\\":\\\"fa-8608-def310eb5a2a 5471 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc007696ecd 0xc007696ece}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 20:50:40.972298 6980 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.903974 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.905798 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 
20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.907593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.907634 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.907644 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.907661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.907672 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.917928 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.918003 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"57605594-99dc-4010-aedb-e801960f2510\\\",\\\"systemUUID\\\":\\\"954acfd8-81a0-40d5-975d-9c927901b7d2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: E1128 20:50:50.918105 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.919404 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.919433 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.919444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.919462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.919473 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:50Z","lastTransitionTime":"2025-11-28T20:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.929868 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.943515 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.954175 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.965115 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.976112 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.986099 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:50 crc kubenswrapper[4957]: I1128 20:50:50.997663 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:50Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.008578 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.020684 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.022935 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.022965 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.022973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.022990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.023000 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.030277 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.044488 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:51Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.052787 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:51Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.125669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.125719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.125731 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.125750 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.125763 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.227728 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.227785 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.227803 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.227826 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.227844 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.330924 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.330978 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.330990 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.331013 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.331026 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.433025 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.433083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.433094 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.433111 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.433123 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.535416 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.535451 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.535460 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.535474 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.535483 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.637851 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.637879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.637886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.637899 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.637907 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.740508 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.740553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.740565 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.740584 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.740597 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.812723 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.812796 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.812762 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:51 crc kubenswrapper[4957]: E1128 20:50:51.812895 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:51 crc kubenswrapper[4957]: E1128 20:50:51.812988 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:51 crc kubenswrapper[4957]: E1128 20:50:51.813133 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.843290 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.843344 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.843360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.843384 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.843402 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.945605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.945652 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.945664 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.945683 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:51 crc kubenswrapper[4957]: I1128 20:50:51.945694 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:51Z","lastTransitionTime":"2025-11-28T20:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.047698 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.047759 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.047777 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.047800 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.047822 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.150485 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.150561 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.150585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.150637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.150655 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.253162 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.253266 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.253286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.253308 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.253326 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.356126 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.356200 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.356264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.356294 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.356321 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.459059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.459101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.459112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.459128 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.459138 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.561067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.561103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.561114 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.561130 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.561140 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.663116 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.663160 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.663169 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.663183 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.663192 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.766016 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.766057 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.766067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.766082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.766094 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.812432 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:52 crc kubenswrapper[4957]: E1128 20:50:52.812773 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.869139 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.869199 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.869253 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.869282 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.869303 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.971999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.972053 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.972070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.972146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:52 crc kubenswrapper[4957]: I1128 20:50:52.972169 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:52Z","lastTransitionTime":"2025-11-28T20:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.074995 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.075056 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.075073 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.075101 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.075123 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.177475 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.177537 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.177555 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.177580 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.177598 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.280648 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.280673 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.280681 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.280693 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.280702 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.383436 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.383547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.383567 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.383646 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.383667 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.485836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.485901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.485923 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.485950 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.485967 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.589257 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.589331 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.589355 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.589386 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.589404 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.692153 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.692267 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.692301 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.692330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.692352 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.795285 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.795364 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.795375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.795395 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.795407 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.812591 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.813152 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 20:50:53 crc kubenswrapper[4957]: E1128 20:50:53.813340 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.813387 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:53 crc kubenswrapper[4957]: E1128 20:50:53.813534 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.813611 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:53 crc kubenswrapper[4957]: E1128 20:50:53.813689 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:53 crc kubenswrapper[4957]: E1128 20:50:53.813757 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.825452 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f64cc30-59c8-4883-a6ef-00ee662935ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d770eac66e6fab266c3b3fb326f244ed3e485e4b546eef8cbaacc011c3dfb9c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://257faedcde99ebfdaa143be26ab8464ab531b2da35cc8af0198e15c494b903ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.828056 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.846702 4957 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2aec96e-2368-4b45-8ded-618c41094ea9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e6f975dc056212dae1235a4713eb0053ca6dc7bf6516ef1f77160a7e3b5f3f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99e76dce11a3973bc772702e6a1f19e97e843454ef1bb5fecaa6e111d6fc7259\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f93430192fb2d5ed35553b648d66a54dbdc9bbacde2f6aac46
557550fb2800ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.865080 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bbb3174d6c93c0b4350da3093d5a70da0b07372718f0a3e76fcd26b110783d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.892995 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"985dfaa6-dc28-434b-9235-b6338e8f331b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:41Z\\\",\\\"message\\\":\\\"fa-8608-def310eb5a2a 5471 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc007696ecd 0xc007696ece}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 20:50:40.972298 6980 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:50:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgqww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qhqwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.897889 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.897931 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.897943 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.897960 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.897971 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:53Z","lastTransitionTime":"2025-11-28T20:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.906351 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e3b5f4-fdf9-45bc-877e-2f8199648b27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf653d224fd3d2b530a67036705a5da37187504afe21bd29be69789afb15941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a72ffb863ed2f8c7d7268fc4e5dd568d152057b525bd8adc76ff6bfb45bc61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnxq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-clll6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.921696 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cccab1fe-132a-4c45-909b-6f1ba7c8abab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmshz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7zhxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc 
kubenswrapper[4957]: I1128 20:50:53.941419 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.956856 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.972682 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"511ad33e-85f7-4c7c-acc7-da98dca8339f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60fb9c2e4d2131fa8bdf9cc7ab3ac3bb39b8ba99ef05254c20d90c71273cdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e91240e3b9dee5e46949457b7e94640895fc46c58ac4f81c450769738d6b7053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b10d1f82d12c430ae3da5d0fc87a17bafa82794dcb2b3387a31c31f14a8c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65370f7de66b2f6b17a42387d9463a2ea42313ab6914634fe940e84446df7fd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:53 crc kubenswrapper[4957]: I1128 20:50:53.989688 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72e02ac6e11f2caacaed056f47a905de704fdb1b549b159a0e2dc2dfee8bc085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:53Z is after 
2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.000365 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.000397 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.000405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.000420 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.000432 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.006166 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.022143 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wd5v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40dd1333-011f-44fd-b0ce-2f289af3a4d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3c2a1e6ee6a5d6328c3a3aee6f0fd970d9d4e1700956d5f5136693a68e078fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv7mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wd5v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.041470 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4sml5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb1978e2-0fff-4af0-b1d4-e21d677ae377\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T20:50:26Z\\\",\\\"message\\\":\\\"2025-11-28T20:49:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8\\\\n2025-11-28T20:49:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f9cefa54-70d8-4772-beea-fef58e1ecae8 to /host/opt/cni/bin/\\\\n2025-11-28T20:49:41Z [verbose] multus-daemon started\\\\n2025-11-28T20:49:41Z [verbose] Readiness Indicator file check\\\\n2025-11-28T20:50:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpfpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4sml5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.059934 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d303b09060445ab50eb985b914e22a8deb962b2dfa86afa402112f5438c37b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d206143d4553a2cdab64b2baddb02099f6063a5a304d0e473a10587d692a6493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.074405 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.087846 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d41c2ca-d1ca-46b0-be19-6e4693f0b827\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea88eaea6456a309ab3150f9fa16042f615057a691de402e6f7e15eb2808c01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cq5dp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hq5x2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.103560 4957 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.103604 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.103614 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.103631 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.103643 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.106992 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16fffbf-545b-489a-a0de-da602df9d272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da447f352fbb1613413481109b28c9af8245d5a812f49b6c87d47e5bfa0a5dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7935d834b00c6a639e08631ec962130f7177618f793277587e08e36354e17a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56cc643b12a635b57dcef512ac92fca775490efc238610861ab3233e189adcf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d79f1a88c35865a9f244179a895184aa614648a7ba189fbd02d9ed4e54fb00d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82e4dde92ef248300054d4d4c8a30597032e6fefc6b9c1e3f3eecc65c2a54f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6252c60244a509b380e8a2356875eb258da72dd46528bee6811d8116ea62b19d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1120a599d37fe2939f46e9a17e527525041b2268e802c187c4df81850f52afa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:46Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbjsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.123639 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8qkjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16385382-457d-4c77-a56f-30917f1c3f66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0350ad079a75ee0cf235bc9d0d314e7da47a9f560b43fe23a1479b0bec0505ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klvfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8qkjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T20:50:54Z is after 2025-08-24T17:21:41Z" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.206394 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.206553 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.206585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.206623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.206656 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.309792 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.309844 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.309860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.309878 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.309892 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.413262 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.413300 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.413317 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.413334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.413345 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.516515 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.516572 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.516587 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.516607 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.516622 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.624536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.625008 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.625037 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.625070 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.625090 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.728412 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.728463 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.728475 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.728502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.728515 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.812871 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:54 crc kubenswrapper[4957]: E1128 20:50:54.813055 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.830005 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.830062 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.830083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.830109 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.830132 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.932480 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.932524 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.932536 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.932554 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:54 crc kubenswrapper[4957]: I1128 20:50:54.932568 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:54Z","lastTransitionTime":"2025-11-28T20:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.035546 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.035600 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.035616 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.035640 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.035657 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.137964 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.137999 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.138007 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.138020 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.138031 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.240597 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.240653 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.240669 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.240692 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.240710 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.344284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.344350 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.344367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.344390 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.344407 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.447973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.448500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.448517 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.448538 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.448552 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.551989 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.552068 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.552105 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.552136 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.552158 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.655661 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.655739 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.655758 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.655790 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.655811 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.759187 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.759285 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.759303 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.759334 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.759356 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.812864 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.812997 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:55 crc kubenswrapper[4957]: E1128 20:50:55.813060 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.813090 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:55 crc kubenswrapper[4957]: E1128 20:50:55.813406 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:55 crc kubenswrapper[4957]: E1128 20:50:55.813510 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.862440 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.862526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.862549 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.862579 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.862603 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.965891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.965976 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.966006 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.966039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:55 crc kubenswrapper[4957]: I1128 20:50:55.966063 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:55Z","lastTransitionTime":"2025-11-28T20:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.068310 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.068362 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.068380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.068400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.068422 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.171021 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.171063 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.171078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.171096 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.171109 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.273975 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.274039 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.274059 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.274087 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.274106 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.377082 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.377146 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.377165 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.377191 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.377240 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.479910 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.479957 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.479970 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.479988 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.480001 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.583312 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.583370 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.583383 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.583399 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.583412 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.685860 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.685897 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.685909 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.685954 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.685969 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.788312 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.788351 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.788360 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.788374 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.788383 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.812762 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:56 crc kubenswrapper[4957]: E1128 20:50:56.813387 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.890316 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.890358 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.890367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.890380 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.890389 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.992741 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.992781 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.992791 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.992828 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:56 crc kubenswrapper[4957]: I1128 20:50:56.992840 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:56Z","lastTransitionTime":"2025-11-28T20:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.095405 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.095435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.095444 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.095456 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.095465 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.197541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.197590 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.197605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.197624 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.197639 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.300710 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.300840 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.300864 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.300893 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.300918 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.403424 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.403471 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.403482 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.403501 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.403514 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.505524 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.505585 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.505593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.505605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.505614 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.607912 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.607974 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.607991 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.608017 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.608033 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.711400 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.711477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.711498 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.711521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.711537 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.812635 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.812680 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:50:57 crc kubenswrapper[4957]: E1128 20:50:57.812736 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:50:57 crc kubenswrapper[4957]: E1128 20:50:57.812890 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.813375 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:57 crc kubenswrapper[4957]: E1128 20:50:57.813650 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.813901 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.813992 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.814089 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.814120 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.814138 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.916228 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.916264 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.916272 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.916286 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:57 crc kubenswrapper[4957]: I1128 20:50:57.916295 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:57Z","lastTransitionTime":"2025-11-28T20:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.019050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.019083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.019091 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.019103 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.019112 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.122330 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.122369 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.122377 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.122393 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.122405 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.224547 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.224583 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.224593 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.224606 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.224616 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.266291 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:50:58 crc kubenswrapper[4957]: E1128 20:50:58.266481 4957 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:58 crc kubenswrapper[4957]: E1128 20:50:58.266548 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs podName:cccab1fe-132a-4c45-909b-6f1ba7c8abab nodeName:}" failed. No retries permitted until 2025-11-28 20:52:02.266529839 +0000 UTC m=+161.735177748 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs") pod "network-metrics-daemon-7zhxb" (UID: "cccab1fe-132a-4c45-909b-6f1ba7c8abab") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.326745 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.326973 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.327083 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.327246 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.327378 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.430112 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.430152 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.430163 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.430177 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.430189 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.533331 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.533367 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.533375 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.533388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.533396 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.636051 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.636078 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.636086 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.636098 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.636106 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.738159 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.738462 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.738523 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.738594 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.738668 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.812906 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:50:58 crc kubenswrapper[4957]: E1128 20:50:58.813489 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.840569 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.840609 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.840620 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.840637 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.840647 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.943186 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.943254 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.943270 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.943284 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:58 crc kubenswrapper[4957]: I1128 20:50:58.943295 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:58Z","lastTransitionTime":"2025-11-28T20:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.045879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.045939 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.045956 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.045984 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.046009 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.148836 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.148868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.148879 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.148891 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.148899 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.250825 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.250863 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.250872 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.250886 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.250895 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.353481 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.353521 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.353530 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.353543 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.353553 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.456198 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.456721 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.456890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.456961 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.457042 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.560050 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.560323 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.560389 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.560526 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.560646 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.663347 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.663605 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.663670 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.663749 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.663811 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.766144 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.766484 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.766503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.766568 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.766587 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.812125 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb"
Nov 28 20:50:59 crc kubenswrapper[4957]: E1128 20:50:59.812548 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.812260 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.812193 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 20:50:59 crc kubenswrapper[4957]: E1128 20:50:59.812906 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 20:50:59 crc kubenswrapper[4957]: E1128 20:50:59.812817 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.869318 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.869371 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.869388 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.869408 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
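Sandbox creation keeps failing and the node keeps reporting NotReady for the same root cause: no network config exists under /etc/kubernetes/cni/net.d. A standalone Go sketch of the kind of check behind that message, assuming the .conf/.conflist/.json extension set that libcni uses by default; this is an illustration, not the actual cri-o/kubelet code path:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative readiness probe for the repeated "no CNI configuration
// file in /etc/kubernetes/cni/net.d/" error: the runtime needs at least
// one network config file in that directory before NetworkReady=true.
func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed libcni defaults
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file found")
}
```

Once the network operator writes a config into that directory, the kubelet flips the Ready condition and the queued sandboxes are created.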
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.869425 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.972435 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.972464 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.972472 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.972485 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:50:59 crc kubenswrapper[4957]: I1128 20:50:59.972494 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:50:59Z","lastTransitionTime":"2025-11-28T20:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.074736 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.074771 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.074780 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.074794 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.074803 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.177627 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.177684 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.177701 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.177724 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.177743 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.280672 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.280716 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.280733 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.280757 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.280774 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.383080 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.383141 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.383674 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.383719 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.383743 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.486578 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.486623 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.486633 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.486647 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.486655 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.589784 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.589868 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.589890 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.589918 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.589943 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.692366 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.692477 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.692502 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.692541 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.692576 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.794657 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.794709 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.794727 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.794755 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.794772 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.812683 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:00 crc kubenswrapper[4957]: E1128 20:51:00.813378 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.851605 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb09f72d-c04d-4c51-a252-2ce9ddbdee3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5176b1f1880ed9e141c941246b0d2adffd33324282b08ff31ff2b20dd8f056a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b2d04db8b356860882df15968cc81d6e742ebf1ae48c6606ff48d707779efa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59cde8df3588c502aeb2171375eeb39308a3e5cf96ea37195d9e5c2ebfd1610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d4321368fe1a765bd9e11beb16cbe7ee7f34712f072d357f9bc57d34f21bb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e7b138191c2204a7ef3b36cec022f0f31b3ec2e2aceb061029fb6bf96dc051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709e2c92dd4a41ed982aac9849c5ee21ccdeef23c49582c2ddcc5d6281359f73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709e2c92dd4a41ed982aac9849c5ee21ccdeef23c49582c2ddcc5d6281359f73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://a53d12573e5dbb6f09df206db818862e2a7923f7da15b75c45c952cc807a5b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a53d12573e5dbb6f09df206db818862e2a7923f7da15b75c45c952cc807a5b2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec06eef052d16b7e862b018369b878b33744413e06b6b23ca9e234996462a3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec06eef052d16b7e862b018369b878b33744413e06b6b23ca9e234996462a3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:51:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.874603 4957 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a8b44e-3899-49e2-b3c2-20f8559964bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T20:49:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T20:49:38Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 20:49:33.116201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 20:49:33.608283 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1850252239/tls.crt::/tmp/serving-cert-1850252239/tls.key\\\\\\\"\\\\nI1128 20:49:38.921276 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 20:49:38.924484 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 20:49:38.924503 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 20:49:38.924527 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 20:49:38.924532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 20:49:38.930523 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 20:49:38.930556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 20:49:38.930567 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 20:49:38.930573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 20:49:38.930577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 20:49:38.930582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 20:49:38.930764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 20:49:38.932261 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T20:49:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T20:49:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T20:49:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T20:49:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T20:51:00Z is after 2025-08-24T17:21:41Z" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.898043 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.898090 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.898107 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.898131 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.898148 4957 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:00Z","lastTransitionTime":"2025-11-28T20:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.926201 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.9261744 podStartE2EDuration="49.9261744s" podCreationTimestamp="2025-11-28 20:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:00.926070617 +0000 UTC m=+100.394718556" watchObservedRunningTime="2025-11-28 20:51:00.9261744 +0000 UTC m=+100.394822319" Nov 28 20:51:00 crc kubenswrapper[4957]: I1128 20:51:00.974790 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wd5v9" podStartSLOduration=81.974771514 podStartE2EDuration="1m21.974771514s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:00.973730767 +0000 UTC m=+100.442378716" watchObservedRunningTime="2025-11-28 20:51:00.974771514 +0000 UTC m=+100.443419433" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000047 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000086 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000097 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000117 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000131 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:01Z","lastTransitionTime":"2025-11-28T20:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.000131 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:01Z","lastTransitionTime":"2025-11-28T20:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.004951 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4sml5" podStartSLOduration=82.004925631 podStartE2EDuration="1m22.004925631s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:00.990440028 +0000 UTC m=+100.459087947" watchObservedRunningTime="2025-11-28 20:51:01.004925631 +0000 UTC m=+100.473573580"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.055305 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podStartSLOduration=82.055280222 podStartE2EDuration="1m22.055280222s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.029406168 +0000 UTC m=+100.498054097" watchObservedRunningTime="2025-11-28 20:51:01.055280222 +0000 UTC m=+100.523928141"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.069160 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8qkjt" podStartSLOduration=82.069141058 podStartE2EDuration="1m22.069141058s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.068635945 +0000 UTC m=+100.537283854" watchObservedRunningTime="2025-11-28 20:51:01.069141058 +0000 UTC m=+100.537788967"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.069555 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dbjsr" podStartSLOduration=82.069549799 podStartE2EDuration="1m22.069549799s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.056515255 +0000 UTC m=+100.525163184" watchObservedRunningTime="2025-11-28 20:51:01.069549799 +0000 UTC m=+100.538197698"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.094334 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.094317844 podStartE2EDuration="31.094317844s" podCreationTimestamp="2025-11-28 20:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.079745419 +0000 UTC m=+100.548393328" watchObservedRunningTime="2025-11-28 20:51:01.094317844 +0000 UTC m=+100.562965753"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.094589 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.094583251 podStartE2EDuration="1m19.094583251s" podCreationTimestamp="2025-11-28 20:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.093767169 +0000 UTC m=+100.562415068" watchObservedRunningTime="2025-11-28 20:51:01.094583251 +0000 UTC m=+100.563231160"
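In these pod_startup_latency_tracker records, podStartE2EDuration is simply the watch-observed running time minus podCreationTimestamp; firstStartedPulling and lastFinishedPulling stay at the zero time because no image pull was needed. A quick Go check of that arithmetic, using the openshift-multus/multus-4sml5 values copied from the record above:

```go
package main

import (
	"fmt"
	"time"
)

// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp,
// with timestamps copied verbatim from the multus-4sml5 record.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-28 20:49:39 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-28 20:51:01.004925631 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m22.004925631s, matching podStartE2EDuration="1m22.004925631s".
	fmt.Println("podStartE2EDuration:", observed.Sub(created))
}
```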
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.102428 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.102500 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.102512 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.102528 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.102539 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:01Z","lastTransitionTime":"2025-11-28T20:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.168472 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-clll6" podStartSLOduration=81.168451593 podStartE2EDuration="1m21.168451593s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.153898588 +0000 UTC m=+100.622546507" watchObservedRunningTime="2025-11-28 20:51:01.168451593 +0000 UTC m=+100.637099502"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.196800 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.196850 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.196865 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.196881 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.196894 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:01Z","lastTransitionTime":"2025-11-28T20:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.210452 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.210492 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.210503 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.210518 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.210528 4957 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T20:51:01Z","lastTransitionTime":"2025-11-28T20:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.236625 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z"] Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.237102 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.239052 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.239071 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.239317 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.239697 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.306997 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.306975444 podStartE2EDuration="8.306975444s" podCreationTimestamp="2025-11-28 20:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.305809673 +0000 UTC m=+100.774457582" watchObservedRunningTime="2025-11-28 20:51:01.306975444 +0000 UTC m=+100.775623373" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.402708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5dee76a-2a20-492b-ac3c-37924a90dd24-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.402745 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5dee76a-2a20-492b-ac3c-37924a90dd24-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.402776 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.402797 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.402831 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5dee76a-2a20-492b-ac3c-37924a90dd24-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.503702 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5dee76a-2a20-492b-ac3c-37924a90dd24-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.503781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5dee76a-2a20-492b-ac3c-37924a90dd24-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.503803 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5dee76a-2a20-492b-ac3c-37924a90dd24-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.503832 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.503864 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.504005 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.504060 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d5dee76a-2a20-492b-ac3c-37924a90dd24-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.505248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5dee76a-2a20-492b-ac3c-37924a90dd24-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.513840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5dee76a-2a20-492b-ac3c-37924a90dd24-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.520893 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5dee76a-2a20-492b-ac3c-37924a90dd24-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kmz2z\" (UID: \"d5dee76a-2a20-492b-ac3c-37924a90dd24\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.551896 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.812478 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.812521 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:01 crc kubenswrapper[4957]: I1128 20:51:01.812575 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:01 crc kubenswrapper[4957]: E1128 20:51:01.812616 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:01 crc kubenswrapper[4957]: E1128 20:51:01.812736 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:01 crc kubenswrapper[4957]: E1128 20:51:01.812929 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:02 crc kubenswrapper[4957]: I1128 20:51:02.338160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" event={"ID":"d5dee76a-2a20-492b-ac3c-37924a90dd24","Type":"ContainerStarted","Data":"ab4ea473a83cc9bcbe8023cdd469efa19569c282cabe9ef708f4e32e8b3e562a"} Nov 28 20:51:02 crc kubenswrapper[4957]: I1128 20:51:02.338220 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" event={"ID":"d5dee76a-2a20-492b-ac3c-37924a90dd24","Type":"ContainerStarted","Data":"189b741dbe86f157ad295b4ece3af797ca65edf4bcc2d70d10c4f1529ef60c0d"} Nov 28 20:51:02 crc kubenswrapper[4957]: I1128 20:51:02.353784 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kmz2z" podStartSLOduration=83.3537692 podStartE2EDuration="1m23.3537692s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:02.353283677 +0000 UTC m=+101.821931586" watchObservedRunningTime="2025-11-28 20:51:02.3537692 +0000 UTC m=+101.822417109" Nov 28 20:51:02 crc kubenswrapper[4957]: I1128 20:51:02.353889 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.353885153 podStartE2EDuration="1m23.353885153s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:01.325955006 +0000 UTC m=+100.794602915" watchObservedRunningTime="2025-11-28 20:51:02.353885153 +0000 UTC m=+101.822533062" Nov 28 20:51:02 crc kubenswrapper[4957]: I1128 20:51:02.812794 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:02 crc kubenswrapper[4957]: E1128 20:51:02.813006 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:03 crc kubenswrapper[4957]: I1128 20:51:03.812504 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:03 crc kubenswrapper[4957]: I1128 20:51:03.812599 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:03 crc kubenswrapper[4957]: I1128 20:51:03.812504 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:03 crc kubenswrapper[4957]: E1128 20:51:03.812670 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:03 crc kubenswrapper[4957]: E1128 20:51:03.812733 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:03 crc kubenswrapper[4957]: E1128 20:51:03.812826 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:04 crc kubenswrapper[4957]: I1128 20:51:04.812888 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:04 crc kubenswrapper[4957]: E1128 20:51:04.813021 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:04 crc kubenswrapper[4957]: I1128 20:51:04.813764 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 20:51:04 crc kubenswrapper[4957]: E1128 20:51:04.813929 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:51:05 crc kubenswrapper[4957]: I1128 20:51:05.812903 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:05 crc kubenswrapper[4957]: I1128 20:51:05.812958 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:05 crc kubenswrapper[4957]: I1128 20:51:05.813067 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:05 crc kubenswrapper[4957]: E1128 20:51:05.813262 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:05 crc kubenswrapper[4957]: E1128 20:51:05.813370 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:05 crc kubenswrapper[4957]: E1128 20:51:05.813504 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:06 crc kubenswrapper[4957]: I1128 20:51:06.813107 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:06 crc kubenswrapper[4957]: E1128 20:51:06.813650 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:07 crc kubenswrapper[4957]: I1128 20:51:07.812377 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:07 crc kubenswrapper[4957]: I1128 20:51:07.812395 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:07 crc kubenswrapper[4957]: I1128 20:51:07.812404 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:07 crc kubenswrapper[4957]: E1128 20:51:07.812511 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:07 crc kubenswrapper[4957]: E1128 20:51:07.812708 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:07 crc kubenswrapper[4957]: E1128 20:51:07.812829 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:08 crc kubenswrapper[4957]: I1128 20:51:08.812353 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:08 crc kubenswrapper[4957]: E1128 20:51:08.812520 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:09 crc kubenswrapper[4957]: I1128 20:51:09.812154 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:09 crc kubenswrapper[4957]: I1128 20:51:09.812189 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:09 crc kubenswrapper[4957]: I1128 20:51:09.812154 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:09 crc kubenswrapper[4957]: E1128 20:51:09.812292 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:09 crc kubenswrapper[4957]: E1128 20:51:09.812469 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:09 crc kubenswrapper[4957]: E1128 20:51:09.812532 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:10 crc kubenswrapper[4957]: I1128 20:51:10.812676 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:10 crc kubenswrapper[4957]: E1128 20:51:10.816593 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:11 crc kubenswrapper[4957]: I1128 20:51:11.811878 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:11 crc kubenswrapper[4957]: I1128 20:51:11.811878 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:11 crc kubenswrapper[4957]: E1128 20:51:11.812009 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:11 crc kubenswrapper[4957]: I1128 20:51:11.812092 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:11 crc kubenswrapper[4957]: E1128 20:51:11.812180 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:11 crc kubenswrapper[4957]: E1128 20:51:11.812286 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:12 crc kubenswrapper[4957]: I1128 20:51:12.812115 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:12 crc kubenswrapper[4957]: E1128 20:51:12.812322 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.385950 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/1.log" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.386713 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/0.log" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.386795 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" containerID="7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009" exitCode=1 Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.386842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerDied","Data":"7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009"} Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.386902 4957 scope.go:117] "RemoveContainer" containerID="1fd44afbb201a30cf55c5f78d91b3a0057d0889653828e1d595e7685a4af74b4" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.387394 4957 scope.go:117] "RemoveContainer" containerID="7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009" Nov 28 20:51:13 crc kubenswrapper[4957]: E1128 20:51:13.387573 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4sml5_openshift-multus(cb1978e2-0fff-4af0-b1d4-e21d677ae377)\"" pod="openshift-multus/multus-4sml5" podUID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.813082 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.813120 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:13 crc kubenswrapper[4957]: I1128 20:51:13.813086 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:13 crc kubenswrapper[4957]: E1128 20:51:13.813314 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:13 crc kubenswrapper[4957]: E1128 20:51:13.813519 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:13 crc kubenswrapper[4957]: E1128 20:51:13.813672 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:14 crc kubenswrapper[4957]: I1128 20:51:14.391259 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/1.log" Nov 28 20:51:14 crc kubenswrapper[4957]: I1128 20:51:14.812382 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:14 crc kubenswrapper[4957]: E1128 20:51:14.812562 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:15 crc kubenswrapper[4957]: I1128 20:51:15.812931 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:15 crc kubenswrapper[4957]: E1128 20:51:15.813060 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:15 crc kubenswrapper[4957]: I1128 20:51:15.813762 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:15 crc kubenswrapper[4957]: I1128 20:51:15.813833 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:15 crc kubenswrapper[4957]: E1128 20:51:15.813986 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:15 crc kubenswrapper[4957]: E1128 20:51:15.814267 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:16 crc kubenswrapper[4957]: I1128 20:51:16.812966 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:16 crc kubenswrapper[4957]: E1128 20:51:16.813185 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:17 crc kubenswrapper[4957]: I1128 20:51:17.812094 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:17 crc kubenswrapper[4957]: I1128 20:51:17.812104 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:17 crc kubenswrapper[4957]: I1128 20:51:17.812122 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:17 crc kubenswrapper[4957]: E1128 20:51:17.812630 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:17 crc kubenswrapper[4957]: E1128 20:51:17.812700 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:17 crc kubenswrapper[4957]: E1128 20:51:17.812792 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:17 crc kubenswrapper[4957]: I1128 20:51:17.812893 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 20:51:17 crc kubenswrapper[4957]: E1128 20:51:17.813073 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qhqwg_openshift-ovn-kubernetes(985dfaa6-dc28-434b-9235-b6338e8f331b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" Nov 28 20:51:18 crc kubenswrapper[4957]: I1128 20:51:18.812598 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:18 crc kubenswrapper[4957]: E1128 20:51:18.812719 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:19 crc kubenswrapper[4957]: I1128 20:51:19.812049 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:19 crc kubenswrapper[4957]: E1128 20:51:19.812141 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:19 crc kubenswrapper[4957]: I1128 20:51:19.812310 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:19 crc kubenswrapper[4957]: E1128 20:51:19.812364 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:19 crc kubenswrapper[4957]: I1128 20:51:19.812475 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:19 crc kubenswrapper[4957]: E1128 20:51:19.812600 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:20 crc kubenswrapper[4957]: I1128 20:51:20.812484 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:20 crc kubenswrapper[4957]: E1128 20:51:20.814780 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:20 crc kubenswrapper[4957]: E1128 20:51:20.825201 4957 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 20:51:20 crc kubenswrapper[4957]: E1128 20:51:20.898336 4957 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 20:51:21 crc kubenswrapper[4957]: I1128 20:51:21.812463 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:21 crc kubenswrapper[4957]: I1128 20:51:21.812550 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:21 crc kubenswrapper[4957]: I1128 20:51:21.812616 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:21 crc kubenswrapper[4957]: E1128 20:51:21.813117 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:21 crc kubenswrapper[4957]: E1128 20:51:21.813384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:21 crc kubenswrapper[4957]: E1128 20:51:21.813512 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:22 crc kubenswrapper[4957]: I1128 20:51:22.812843 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:22 crc kubenswrapper[4957]: E1128 20:51:22.813163 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:23 crc kubenswrapper[4957]: I1128 20:51:23.812921 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:23 crc kubenswrapper[4957]: I1128 20:51:23.812982 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:23 crc kubenswrapper[4957]: I1128 20:51:23.812930 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:23 crc kubenswrapper[4957]: E1128 20:51:23.813143 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:23 crc kubenswrapper[4957]: E1128 20:51:23.813483 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:23 crc kubenswrapper[4957]: E1128 20:51:23.813574 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:24 crc kubenswrapper[4957]: I1128 20:51:24.812303 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:24 crc kubenswrapper[4957]: E1128 20:51:24.812453 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:25 crc kubenswrapper[4957]: I1128 20:51:25.812570 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:25 crc kubenswrapper[4957]: I1128 20:51:25.812634 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:25 crc kubenswrapper[4957]: I1128 20:51:25.812743 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:25 crc kubenswrapper[4957]: E1128 20:51:25.813019 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:25 crc kubenswrapper[4957]: E1128 20:51:25.813121 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:25 crc kubenswrapper[4957]: E1128 20:51:25.813165 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:25 crc kubenswrapper[4957]: E1128 20:51:25.899297 4957 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 20:51:26 crc kubenswrapper[4957]: I1128 20:51:26.813075 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:26 crc kubenswrapper[4957]: E1128 20:51:26.813396 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:26 crc kubenswrapper[4957]: I1128 20:51:26.813456 4957 scope.go:117] "RemoveContainer" containerID="7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009" Nov 28 20:51:27 crc kubenswrapper[4957]: I1128 20:51:27.435763 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/1.log" Nov 28 20:51:27 crc kubenswrapper[4957]: I1128 20:51:27.435829 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerStarted","Data":"52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4"} Nov 28 20:51:27 crc kubenswrapper[4957]: I1128 20:51:27.811911 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:27 crc kubenswrapper[4957]: I1128 20:51:27.811911 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:27 crc kubenswrapper[4957]: E1128 20:51:27.812040 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:27 crc kubenswrapper[4957]: I1128 20:51:27.811931 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:27 crc kubenswrapper[4957]: E1128 20:51:27.812180 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:27 crc kubenswrapper[4957]: E1128 20:51:27.812231 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:28 crc kubenswrapper[4957]: I1128 20:51:28.812673 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:28 crc kubenswrapper[4957]: E1128 20:51:28.812858 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:28 crc kubenswrapper[4957]: I1128 20:51:28.813588 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.443329 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/3.log" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.446013 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerStarted","Data":"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"} Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.446711 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.595741 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podStartSLOduration=110.595710201 podStartE2EDuration="1m50.595710201s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:29.477324312 +0000 UTC m=+128.945972221" watchObservedRunningTime="2025-11-28 20:51:29.595710201 +0000 UTC m=+129.064358150" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.599582 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7zhxb"] Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.599765 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:29 crc kubenswrapper[4957]: E1128 20:51:29.599950 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.812728 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:29 crc kubenswrapper[4957]: I1128 20:51:29.812766 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:29 crc kubenswrapper[4957]: E1128 20:51:29.813256 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:29 crc kubenswrapper[4957]: E1128 20:51:29.813351 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:30 crc kubenswrapper[4957]: I1128 20:51:30.812159 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:30 crc kubenswrapper[4957]: E1128 20:51:30.813948 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:30 crc kubenswrapper[4957]: E1128 20:51:30.899955 4957 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 20:51:31 crc kubenswrapper[4957]: I1128 20:51:31.812673 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:31 crc kubenswrapper[4957]: I1128 20:51:31.812760 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:31 crc kubenswrapper[4957]: E1128 20:51:31.812808 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:31 crc kubenswrapper[4957]: I1128 20:51:31.812845 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:31 crc kubenswrapper[4957]: E1128 20:51:31.812953 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:31 crc kubenswrapper[4957]: E1128 20:51:31.813013 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:32 crc kubenswrapper[4957]: I1128 20:51:32.812092 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:32 crc kubenswrapper[4957]: E1128 20:51:32.812761 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:33 crc kubenswrapper[4957]: I1128 20:51:33.813043 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:33 crc kubenswrapper[4957]: I1128 20:51:33.813064 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:33 crc kubenswrapper[4957]: E1128 20:51:33.813279 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:33 crc kubenswrapper[4957]: I1128 20:51:33.813043 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:33 crc kubenswrapper[4957]: E1128 20:51:33.813345 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:33 crc kubenswrapper[4957]: E1128 20:51:33.813465 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:34 crc kubenswrapper[4957]: I1128 20:51:34.812573 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:34 crc kubenswrapper[4957]: E1128 20:51:34.812794 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 20:51:35 crc kubenswrapper[4957]: I1128 20:51:35.812409 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:35 crc kubenswrapper[4957]: I1128 20:51:35.812687 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:35 crc kubenswrapper[4957]: I1128 20:51:35.812621 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:35 crc kubenswrapper[4957]: E1128 20:51:35.812812 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7zhxb" podUID="cccab1fe-132a-4c45-909b-6f1ba7c8abab" Nov 28 20:51:35 crc kubenswrapper[4957]: E1128 20:51:35.812959 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 20:51:35 crc kubenswrapper[4957]: E1128 20:51:35.813164 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 20:51:36 crc kubenswrapper[4957]: I1128 20:51:36.812394 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:36 crc kubenswrapper[4957]: I1128 20:51:36.815539 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 20:51:36 crc kubenswrapper[4957]: I1128 20:51:36.816846 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.812462 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.812491 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.812491 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.816123 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.816353 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.817827 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 20:51:37 crc kubenswrapper[4957]: I1128 20:51:37.818720 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.254067 4957 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.331790 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.332370 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.335304 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.335568 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.336527 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.336773 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.339228 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f229v"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.339631 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v6sdf"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.339901 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.340495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.341361 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.341394 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.343396 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.343900 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.345592 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2stl"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.346040 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.346640 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.347116 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.348047 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.348379 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.349369 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.350151 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.350323 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.350794 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.351113 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.351767 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.353665 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.362241 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.362873 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.370176 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.370187 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.373425 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.373432 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.377148 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9529v"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.377632 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.378087 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.379661 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.379888 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.380360 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.383795 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384063 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384131 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384421 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384740 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384742 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384847 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.384879 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.385001 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.385100 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.385107 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.385487 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.386137 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.387036 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 20:51:42 crc 
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.387108 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.387371 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.387723 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.387904 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.388177 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.388456 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.388835 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389023 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389229 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389336 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xs7kj"]
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389488 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389681 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389783 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389722 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xs7kj"
Need to start a new one" pod="openshift-console/downloads-7954f5f757-xs7kj" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.389979 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390083 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390188 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390348 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390491 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390648 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.390780 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391592 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391742 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391797 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391848 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391932 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.391978 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.392198 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.392243 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.392218 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.396907 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.397383 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.398073 4957 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.398181 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.398397 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.398688 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.398955 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.399489 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.400699 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.401446 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.402034 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.402380 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8bq66"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.402495 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.402643 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417185 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417347 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417414 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417481 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417718 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417813 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.417935 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.418509 4957 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.418647 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.421472 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.421896 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.422853 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.423324 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.423628 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.423954 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.432669 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433398 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433658 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433773 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433873 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433776 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433928 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.433823 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.434073 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.437316 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.437952 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-khfrs"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.438467 4957 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.438733 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.438924 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.440185 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.442706 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.452416 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.452499 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.452840 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.453654 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.453867 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455464 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-audit-policies\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455496 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455519 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455556 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-etcd-client\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455588 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455611 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-auth-proxy-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455633 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxjp\" (UniqueName: \"kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455647 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fknqp\" (UniqueName: \"kubernetes.io/projected/51349199-ef14-46c8-9511-c14d14305b77-kube-api-access-fknqp\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455821 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-serving-cert\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455838 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9418283-90eb-4525-977d-296f994539fd-audit-dir\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-trusted-ca\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455899 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455915 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjb2\" (UniqueName: \"kubernetes.io/projected/6dd906bf-c431-4243-96ef-236ed368bf11-kube-api-access-gvjb2\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455933 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455946 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455960 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d1df251-3903-4430-a03a-792c9a01051e-serving-cert\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.455983 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456004 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74828378-0762-464d-b1c5-bda879361119-serving-cert\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456019 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456036 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/debcc187-997b-4ff2-ae1b-0a187aba449f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456066 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mwx\" (UniqueName: \"kubernetes.io/projected/f9418283-90eb-4525-977d-296f994539fd-kube-api-access-h6mwx\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456081 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd906bf-c431-4243-96ef-236ed368bf11-serving-cert\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456095 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/51349199-ef14-46c8-9511-c14d14305b77-machine-approver-tls\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456667 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456688 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-config\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456717 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bpj\" (UniqueName: \"kubernetes.io/projected/debcc187-997b-4ff2-ae1b-0a187aba449f-kube-api-access-s6bpj\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456737 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-encryption-config\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456752 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456775 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456790 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7j9h\" (UniqueName: \"kubernetes.io/projected/b36a4b12-b069-4dc4-a503-936aae20d06e-kube-api-access-v7j9h\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456805 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456821 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456835 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-config\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456851 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-images\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456865 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdzk\" (UniqueName: \"kubernetes.io/projected/74828378-0762-464d-b1c5-bda879361119-kube-api-access-nrdzk\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456881 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456906 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36a4b12-b069-4dc4-a503-936aae20d06e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456922 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456942 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456956 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shckd\" (UniqueName: \"kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pw4z\" (UniqueName: 
\"kubernetes.io/projected/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-kube-api-access-6pw4z\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456987 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-config\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.457001 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7q9\" (UniqueName: \"kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.457025 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnk7v\" (UniqueName: \"kubernetes.io/projected/8d1df251-3903-4430-a03a-792c9a01051e-kube-api-access-mnk7v\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.457041 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/74828378-0762-464d-b1c5-bda879361119-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.456905 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.457570 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.494969 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.495138 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.495311 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.495495 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.495699 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.496310 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.496880 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.497918 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.509019 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.510859 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.512486 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-t54js"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.512876 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.513196 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.513287 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.513458 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.513734 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.514032 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g8h7t"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.514044 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.514097 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.514664 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.514791 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.515635 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.516336 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.516441 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.516631 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66ztx"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.516665 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.522267 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.522379 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.523719 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.523865 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.524322 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.524463 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.525451 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.525523 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526019 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526337 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5946"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526727 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v6sdf"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526742 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2stl"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526753 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.526764 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527057 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527190 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527286 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527302 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527490 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkd2s"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.527999 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.528657 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.529184 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.530546 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.532419 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-khfrs"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.533525 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.534747 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.535503 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.538159 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.539995 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66ztx"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.542118 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f229v"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.543892 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xs7kj"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.545440 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.546163 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.547119 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.550405 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6jcwn"]
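
The burst above is the kubelet's sync loop picking up pod objects from the API server ("SyncLoop ADD"/"SyncLoop UPDATE", kubelet.go) and, for each pod that has no runtime sandbox yet, recording the decision to create one at util.go:30 ("No sandbox for pod can be found. Need to start a new one"); every workload needing a fresh sandbox at once is consistent with a kubelet that has just (re)started. A minimal sketch for tallying those decisions from a saved excerpt of this journal; the kubelet.log filename is a placeholder, and the regex assumes exactly the message text shown in these entries:

    import re
    from collections import Counter

    # util.go:30 message, followed by the structured pod="namespace/name" field;
    # \s+ tolerates the hard line-wrapping present in dumps like this one.
    SANDBOX_RE = re.compile(
        r'"No sandbox for pod can be found\.\s+'
        r'Need to start a new one"\s+pod="([^"]+)"'
    )

    def pods_needing_sandboxes(path):
        """Count how often each pod triggered a new-sandbox decision."""
        with open(path, encoding="utf-8", errors="replace") as f:
            return Counter(SANDBOX_RE.findall(f.read()))

    if __name__ == "__main__":
        # Placeholder path; e.g. captured with: journalctl -u kubelet > kubelet.log
        for pod, count in pods_needing_sandboxes("kubelet.log").most_common():
            print(f"{count:3d}  {pod}")
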
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.550882 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.552641 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.555888 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.556399 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.557876 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.557918 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.557951 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36a4b12-b069-4dc4-a503-936aae20d06e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.557968 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.557993 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558018 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558033 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shckd\" (UniqueName: \"kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd\") pod 
\"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558049 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtb6\" (UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-kube-api-access-5qtb6\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558066 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pw4z\" (UniqueName: \"kubernetes.io/projected/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-kube-api-access-6pw4z\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558082 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-config\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb7q9\" (UniqueName: \"kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnk7v\" (UniqueName: \"kubernetes.io/projected/8d1df251-3903-4430-a03a-792c9a01051e-kube-api-access-mnk7v\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558146 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/74828378-0762-464d-b1c5-bda879361119-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-audit-policies\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558177 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: 
\"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558192 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558226 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558250 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-etcd-client\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-auth-proxy-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558298 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558314 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjxjp\" (UniqueName: \"kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 
Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fknqp\" (UniqueName: \"kubernetes.io/projected/51349199-ef14-46c8-9511-c14d14305b77-kube-api-access-fknqp\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-serving-cert\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558413 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9418283-90eb-4525-977d-296f994539fd-audit-dir\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558427 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-trusted-ca\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558444 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-encryption-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558460 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ddecc7-b52f-4879-b4cb-af8fb7069448-config\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558475 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558492 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558510 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff2527a-b897-47e0-92ac-f9319119ee43-serving-cert\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj5gl\" (UniqueName: \"kubernetes.io/projected/80f38516-fdd0-42ed-855f-7f4f01a98786-kube-api-access-kj5gl\") pod \"downloads-7954f5f757-xs7kj\" (UID: \"80f38516-fdd0-42ed-855f-7f4f01a98786\") " pod="openshift-console/downloads-7954f5f757-xs7kj" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558774 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558792 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558809 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjb2\" (UniqueName: \"kubernetes.io/projected/6dd906bf-c431-4243-96ef-236ed368bf11-kube-api-access-gvjb2\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558838 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a145189a-74bb-4100-8ba3-52aa988e0163-config\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558853 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558870 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558886 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558900 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-client\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558914 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58ddecc7-b52f-4879-b4cb-af8fb7069448-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558929 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558944 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-node-pullsecrets\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558959 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit-dir\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.558987 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a145189a-74bb-4100-8ba3-52aa988e0163-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559009 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d1df251-3903-4430-a03a-792c9a01051e-serving-cert\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559042 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8mz\" (UniqueName: \"kubernetes.io/projected/ea21530c-53e8-469d-bd38-997357f9b970-kube-api-access-hw8mz\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559075 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559090 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74828378-0762-464d-b1c5-bda879361119-serving-cert\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59vvx\" (UniqueName: \"kubernetes.io/projected/27a7baa1-a66c-4c13-be52-2a401578c92d-kube-api-access-59vvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559120 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies\") pod 
\"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559137 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559154 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/debcc187-997b-4ff2-ae1b-0a187aba449f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559169 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-serving-cert\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559200 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtjb\" (UniqueName: \"kubernetes.io/projected/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-kube-api-access-kjtjb\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559232 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a145189a-74bb-4100-8ba3-52aa988e0163-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559253 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mwx\" (UniqueName: \"kubernetes.io/projected/f9418283-90eb-4525-977d-296f994539fd-kube-api-access-h6mwx\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559267 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd906bf-c431-4243-96ef-236ed368bf11-serving-cert\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc 
kubenswrapper[4957]: I1128 20:51:42.559283 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/51349199-ef14-46c8-9511-c14d14305b77-machine-approver-tls\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a7baa1-a66c-4c13-be52-2a401578c92d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559318 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-serving-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559333 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-image-import-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559348 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559362 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dqq9\" (UniqueName: \"kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559404 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559420 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-config\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559440 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6bpj\" (UniqueName: \"kubernetes.io/projected/debcc187-997b-4ff2-ae1b-0a187aba449f-kube-api-access-s6bpj\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559470 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea21530c-53e8-469d-bd38-997357f9b970-metrics-tls\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559485 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-encryption-config\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559489 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559522 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559537 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559606 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7j9h\" (UniqueName: \"kubernetes.io/projected/b36a4b12-b069-4dc4-a503-936aae20d06e-kube-api-access-v7j9h\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559624 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559653 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-config\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559667 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-images\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559686 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrdzk\" (UniqueName: \"kubernetes.io/projected/74828378-0762-464d-b1c5-bda879361119-kube-api-access-nrdzk\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559707 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.559730 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ddecc7-b52f-4879-b4cb-af8fb7069448-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.560354 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.560357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-service-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.560391 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.560748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.561389 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.561565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-config\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.561964 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-config\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.561988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36a4b12-b069-4dc4-a503-936aae20d06e-images\") pod 
\"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.563408 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.563444 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.563824 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51349199-ef14-46c8-9511-c14d14305b77-auth-proxy-config\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.565196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.565518 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.565558 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/51349199-ef14-46c8-9511-c14d14305b77-machine-approver-tls\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.566272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9418283-90eb-4525-977d-296f994539fd-audit-dir\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.566241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-encryption-config\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.567105 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.567238 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.567915 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.568400 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d1df251-3903-4430-a03a-792c9a01051e-config\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.568536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.566029 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-etcd-client\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.569573 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/debcc187-997b-4ff2-ae1b-0a187aba449f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.569900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dd906bf-c431-4243-96ef-236ed368bf11-trusted-ca\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.569992 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.570450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/74828378-0762-464d-b1c5-bda879361119-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.570610 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9418283-90eb-4525-977d-296f994539fd-audit-policies\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.570707 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.570742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9418283-90eb-4525-977d-296f994539fd-serving-cert\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.571176 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d1df251-3903-4430-a03a-792c9a01051e-serving-cert\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.572167 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.572145 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd906bf-c431-4243-96ef-236ed368bf11-serving-cert\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.572384 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.572758 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff2527a-b897-47e0-92ac-f9319119ee43-serving-cert\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 
20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.573238 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36a4b12-b069-4dc4-a503-936aae20d06e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.573991 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.574449 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.574696 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.574915 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.578644 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.579762 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.582167 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9529v"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.582652 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74828378-0762-464d-b1c5-bda879361119-serving-cert\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.583751 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.584904 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.586156 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.587693 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.588950 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k"] Nov 28 20:51:42 crc kubenswrapper[4957]: 
I1128 20:51:42.590138 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.591386 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.592761 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5946"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.594374 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.595313 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkd2s"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.596691 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.598093 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8bq66"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.599126 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.600500 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.602382 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.603623 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5xmw"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.604978 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g8h7t"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.605066 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.605786 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h4fjb"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.606158 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.606810 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.607877 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5xmw"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.608997 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h4fjb"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.610369 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lw4b4"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.611378 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.611445 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lw4b4"] Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.614881 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.634325 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.654691 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660553 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-serving-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-image-import-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660622 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dcccb8-0f23-4caa-8a48-447e043571de-config\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660658 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrgvd\" (UniqueName: \"kubernetes.io/projected/2ff64fb3-2997-4711-97af-97a674dd4424-kube-api-access-mrgvd\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.660788 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661507 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661535 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-metrics-certs\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661633 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfl5m\" (UniqueName: \"kubernetes.io/projected/1857eeec-a0e1-463e-a77d-a41da08f2b3e-kube-api-access-pfl5m\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661703 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ddecc7-b52f-4879-b4cb-af8fb7069448-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661726 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661746 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfpv\" (UniqueName: 
\"kubernetes.io/projected/f1ea6588-c958-4c94-8c43-d4576d12c1d0-kube-api-access-9tfpv\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d80547a-482d-49a2-9363-616e21af8403-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661774 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-serving-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661813 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsfr\" (UniqueName: \"kubernetes.io/projected/f667aadc-3176-4462-a4e1-38d6d8222d47-kube-api-access-szsfr\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661831 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/345ee1b6-acbf-424c-bb00-f7545f4393ad-tmpfs\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661847 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8123ea-935f-4537-a8ca-83107de89a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661882 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qtb6\" 
(UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-kube-api-access-5qtb6\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661885 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-image-import-ca\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661944 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-webhook-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.661982 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-proxy-tls\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662006 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662030 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svmj\" (UniqueName: \"kubernetes.io/projected/9d80547a-482d-49a2-9363-616e21af8403-kube-api-access-8svmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662054 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-srv-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662120 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662190 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662200 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-encryption-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662421 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ddecc7-b52f-4879-b4cb-af8fb7069448-config\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662529 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662671 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662712 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2frh\" (UniqueName: \"kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh\") pod \"collect-profiles-29406045-99lc6\" (UID: 
\"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662831 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a145189a-74bb-4100-8ba3-52aa988e0163-config\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663060 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8123ea-935f-4537-a8ca-83107de89a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58ddecc7-b52f-4879-b4cb-af8fb7069448-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.662877 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663147 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-node-pullsecrets\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663178 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit-dir\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663104 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a145189a-74bb-4100-8ba3-52aa988e0163-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663225 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlxn\" (UniqueName: \"kubernetes.io/projected/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-kube-api-access-znlxn\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8123ea-935f-4537-a8ca-83107de89a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-node-pullsecrets\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663102 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ddecc7-b52f-4879-b4cb-af8fb7069448-config\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663260 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663310 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpsz\" (UniqueName: \"kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663345 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff64fb3-2997-4711-97af-97a674dd4424-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-key\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663388 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a145189a-74bb-4100-8ba3-52aa988e0163-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663400 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a145189a-74bb-4100-8ba3-52aa988e0163-config\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a7baa1-a66c-4c13-be52-2a401578c92d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663435 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663456 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dqq9\" (UniqueName: \"kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663500 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea21530c-53e8-469d-bd38-997357f9b970-metrics-tls\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663516 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663544 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-stats-auth\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663564 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663595 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b39d147-f628-444c-9333-37b05318296e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663630 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663649 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663666 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-images\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663691 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663726 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663744 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663785 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qzq\" (UniqueName: \"kubernetes.io/projected/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-kube-api-access-q9qzq\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-audit-dir\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgjvf\" (UniqueName: \"kubernetes.io/projected/54dfe258-d553-48ff-b47d-dead72eb7646-kube-api-access-dgjvf\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.663969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-default-certificate\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664002 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5pqv\" (UniqueName: \"kubernetes.io/projected/41dcccb8-0f23-4caa-8a48-447e043571de-kube-api-access-g5pqv\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664019 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-cabundle\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664056 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664088 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664148 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7ph\" (UniqueName: 
\"kubernetes.io/projected/b8ff6724-d919-4bbf-87c6-3b521739d1a2-kube-api-access-vf7ph\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664165 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d80547a-482d-49a2-9363-616e21af8403-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664184 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcccb8-0f23-4caa-8a48-447e043571de-serving-cert\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664216 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj5gl\" (UniqueName: \"kubernetes.io/projected/80f38516-fdd0-42ed-855f-7f4f01a98786-kube-api-access-kj5gl\") pod \"downloads-7954f5f757-xs7kj\" (UID: \"80f38516-fdd0-42ed-855f-7f4f01a98786\") " pod="openshift-console/downloads-7954f5f757-xs7kj" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664302 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-srv-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664303 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664320 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-client\") pod \"apiserver-76f77b778f-8bq66\" (UID: 
\"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664339 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9f6c\" (UniqueName: \"kubernetes.io/projected/345ee1b6-acbf-424c-bb00-f7545f4393ad-kube-api-access-m9f6c\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664123 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664368 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrgm\" (UniqueName: \"kubernetes.io/projected/07043640-1d98-4100-914f-ace6faae73d7-kube-api-access-jtrgm\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664453 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z7rh\" (UniqueName: \"kubernetes.io/projected/6b39d147-f628-444c-9333-37b05318296e-kube-api-access-4z7rh\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664474 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8mz\" (UniqueName: \"kubernetes.io/projected/ea21530c-53e8-469d-bd38-997357f9b970-kube-api-access-hw8mz\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664502 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59vvx\" (UniqueName: \"kubernetes.io/projected/27a7baa1-a66c-4c13-be52-2a401578c92d-kube-api-access-59vvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664520 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664538 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67aafc66-e89d-468e-b26c-c6cd8c842020-service-ca-bundle\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " 
pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54dfe258-d553-48ff-b47d-dead72eb7646-proxy-tls\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664673 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-serving-cert\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664690 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664710 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtjb\" (UniqueName: \"kubernetes.io/projected/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-kube-api-access-kjtjb\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.664729 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4r8r\" (UniqueName: \"kubernetes.io/projected/67aafc66-e89d-468e-b26c-c6cd8c842020-kube-api-access-t4r8r\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.665772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.667828 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.667900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a145189a-74bb-4100-8ba3-52aa988e0163-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.667986 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-encryption-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668125 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668305 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668411 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668637 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668873 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-config\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.668887 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.669292 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a7baa1-a66c-4c13-be52-2a401578c92d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.669616 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.670375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.670852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ddecc7-b52f-4879-b4cb-af8fb7069448-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.671112 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.671755 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.671982 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea21530c-53e8-469d-bd38-997357f9b970-metrics-tls\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.672088 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.672382 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-etcd-client\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.672559 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-serving-cert\") pod \"apiserver-76f77b778f-8bq66\" (UID: 
\"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.695029 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.714484 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.733986 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.754251 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765248 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2frh\" (UniqueName: \"kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765298 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8123ea-935f-4537-a8ca-83107de89a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765317 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765340 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znlxn\" (UniqueName: \"kubernetes.io/projected/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-kube-api-access-znlxn\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765355 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8123ea-935f-4537-a8ca-83107de89a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765388 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpsz\" (UniqueName: \"kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765420 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff64fb3-2997-4711-97af-97a674dd4424-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765441 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-key\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-stats-auth\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765488 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765509 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b39d147-f628-444c-9333-37b05318296e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765523 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765539 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: 
\"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765555 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-images\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765577 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765594 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765611 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765850 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765866 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-default-certificate\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765887 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qzq\" (UniqueName: \"kubernetes.io/projected/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-kube-api-access-q9qzq\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765904 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjvf\" (UniqueName: \"kubernetes.io/projected/54dfe258-d553-48ff-b47d-dead72eb7646-kube-api-access-dgjvf\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc 
kubenswrapper[4957]: I1128 20:51:42.765931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5pqv\" (UniqueName: \"kubernetes.io/projected/41dcccb8-0f23-4caa-8a48-447e043571de-kube-api-access-g5pqv\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-cabundle\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765974 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7ph\" (UniqueName: \"kubernetes.io/projected/b8ff6724-d919-4bbf-87c6-3b521739d1a2-kube-api-access-vf7ph\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.765991 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d80547a-482d-49a2-9363-616e21af8403-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcccb8-0f23-4caa-8a48-447e043571de-serving-cert\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766028 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-srv-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766046 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9f6c\" (UniqueName: \"kubernetes.io/projected/345ee1b6-acbf-424c-bb00-f7545f4393ad-kube-api-access-m9f6c\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrgm\" (UniqueName: \"kubernetes.io/projected/07043640-1d98-4100-914f-ace6faae73d7-kube-api-access-jtrgm\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766080 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z7rh\" (UniqueName: 
\"kubernetes.io/projected/6b39d147-f628-444c-9333-37b05318296e-kube-api-access-4z7rh\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766108 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67aafc66-e89d-468e-b26c-c6cd8c842020-service-ca-bundle\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766131 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54dfe258-d553-48ff-b47d-dead72eb7646-proxy-tls\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766146 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4r8r\" (UniqueName: \"kubernetes.io/projected/67aafc66-e89d-468e-b26c-c6cd8c842020-kube-api-access-t4r8r\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrgvd\" (UniqueName: \"kubernetes.io/projected/2ff64fb3-2997-4711-97af-97a674dd4424-kube-api-access-mrgvd\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766226 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766251 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41dcccb8-0f23-4caa-8a48-447e043571de-config\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766268 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: 
\"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766285 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-metrics-certs\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfl5m\" (UniqueName: \"kubernetes.io/projected/1857eeec-a0e1-463e-a77d-a41da08f2b3e-kube-api-access-pfl5m\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766336 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tfpv\" (UniqueName: \"kubernetes.io/projected/f1ea6588-c958-4c94-8c43-d4576d12c1d0-kube-api-access-9tfpv\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766369 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d80547a-482d-49a2-9363-616e21af8403-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsfr\" (UniqueName: \"kubernetes.io/projected/f667aadc-3176-4462-a4e1-38d6d8222d47-kube-api-access-szsfr\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766400 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/345ee1b6-acbf-424c-bb00-f7545f4393ad-tmpfs\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766414 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " 
pod="openshift-machine-config-operator/machine-config-server-6jcwn" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766429 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8123ea-935f-4537-a8ca-83107de89a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766455 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-webhook-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766476 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766495 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-proxy-tls\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766516 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svmj\" (UniqueName: \"kubernetes.io/projected/9d80547a-482d-49a2-9363-616e21af8403-kube-api-access-8svmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766531 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-srv-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766555 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.766575 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.767632 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.767838 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/345ee1b6-acbf-424c-bb00-f7545f4393ad-tmpfs\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.768425 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.774775 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.794870 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.814245 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.834181 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.840946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8123ea-935f-4537-a8ca-83107de89a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.854595 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.859049 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.874673 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.877563 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/cd8123ea-935f-4537-a8ca-83107de89a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.900108 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.908665 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.914078 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.934940 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.942733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-metrics-certs\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.954899 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.974507 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 20:51:42 crc kubenswrapper[4957]: I1128 20:51:42.994922 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.015421 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.017818 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67aafc66-e89d-468e-b26c-c6cd8c842020-service-ca-bundle\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.033978 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.054738 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.060932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-default-certificate\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" 
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.075376 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.095051 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.100754 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67aafc66-e89d-468e-b26c-c6cd8c842020-stats-auth\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.115700 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.120760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d80547a-482d-49a2-9363-616e21af8403-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.134093 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.137702 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d80547a-482d-49a2-9363-616e21af8403-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.156134 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.175552 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.194871 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.199947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b39d147-f628-444c-9333-37b05318296e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.215602 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.235492 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.254584 4957 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.261917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-srv-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.274986 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.298301 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.310953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.311197 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.311997 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.313836 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.320597 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ea6588-c958-4c94-8c43-d4576d12c1d0-srv-cert\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.334775 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.340062 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-key\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.355161 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.358997 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b8ff6724-d919-4bbf-87c6-3b521739d1a2-signing-cabundle\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.374202 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.395261 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.414525 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.434094 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.441462 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcccb8-0f23-4caa-8a48-447e043571de-serving-cert\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.454128 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.475148 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.494389 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.514639 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.533428 4957 request.go:700] Waited for 1.00872707s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.536165 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.542825 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-proxy-tls\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.555564 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.558773 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/41dcccb8-0f23-4caa-8a48-447e043571de-config\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.575861 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.595901 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.616349 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.617805 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.635500 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.642117 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff64fb3-2997-4711-97af-97a674dd4424-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.655040 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.674989 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.695657 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.715195 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.721843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54dfe258-d553-48ff-b47d-dead72eb7646-proxy-tls\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.734698 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.738274 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/54dfe258-d553-48ff-b47d-dead72eb7646-images\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.755418 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.762006 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.763388 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/345ee1b6-acbf-424c-bb00-f7545f4393ad-webhook-cert\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767084 4957 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767120 4957 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767092 4957 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767180 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config podName:1857eeec-a0e1-463e-a77d-a41da08f2b3e nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.267152604 +0000 UTC m=+143.735800553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config") pod "etcd-operator-b45778765-bkd2s" (UID: "1857eeec-a0e1-463e-a77d-a41da08f2b3e") : failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767236 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs podName:07043640-1d98-4100-914f-ace6faae73d7 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.267197956 +0000 UTC m=+143.735845905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs") pod "machine-config-server-6jcwn" (UID: "07043640-1d98-4100-914f-ace6faae73d7") : failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767263 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client podName:1857eeec-a0e1-463e-a77d-a41da08f2b3e nodeName:}" failed. 
No retries permitted until 2025-11-28 20:51:44.267249287 +0000 UTC m=+143.735897236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client") pod "etcd-operator-b45778765-bkd2s" (UID: "1857eeec-a0e1-463e-a77d-a41da08f2b3e") : failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767358 4957 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767435 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config podName:f667aadc-3176-4462-a4e1-38d6d8222d47 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.267414513 +0000 UTC m=+143.736062512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config") pod "openshift-controller-manager-operator-756b6f6bc6-6pfc5" (UID: "f667aadc-3176-4462-a4e1-38d6d8222d47") : failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767449 4957 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767511 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert podName:1857eeec-a0e1-463e-a77d-a41da08f2b3e nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.267498036 +0000 UTC m=+143.736145985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert") pod "etcd-operator-b45778765-bkd2s" (UID: "1857eeec-a0e1-463e-a77d-a41da08f2b3e") : failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767569 4957 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.767630 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca podName:1857eeec-a0e1-463e-a77d-a41da08f2b3e nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.26761454 +0000 UTC m=+143.736262459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca") pod "etcd-operator-b45778765-bkd2s" (UID: "1857eeec-a0e1-463e-a77d-a41da08f2b3e") : failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768184 4957 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768226 4957 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768271 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token podName:07043640-1d98-4100-914f-ace6faae73d7 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.268254282 +0000 UTC m=+143.736902201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token") pod "machine-config-server-6jcwn" (UID: "07043640-1d98-4100-914f-ace6faae73d7") : failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768292 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert podName:f667aadc-3176-4462-a4e1-38d6d8222d47 nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.268281983 +0000 UTC m=+143.736929902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-6pfc5" (UID: "f667aadc-3176-4462-a4e1-38d6d8222d47") : failed to sync secret cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768191 4957 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 28 20:51:43 crc kubenswrapper[4957]: E1128 20:51:43.768856 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-ca podName:1857eeec-a0e1-463e-a77d-a41da08f2b3e nodeName:}" failed. No retries permitted until 2025-11-28 20:51:44.268831862 +0000 UTC m=+143.737479811 (durationBeforeRetry 500ms). 
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.774264 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.795730 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.815840 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.834709 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.854296 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.874517 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.895520 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.914951 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.935473 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.964334 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.976313 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 20:51:43 crc kubenswrapper[4957]: I1128 20:51:43.994800 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.015366 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.035474 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.054977 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.075133 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.095747 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.115779 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.134766 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.154768 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.175167 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.222098 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6bpj\" (UniqueName: \"kubernetes.io/projected/debcc187-997b-4ff2-ae1b-0a187aba449f-kube-api-access-s6bpj\") pod \"cluster-samples-operator-665b6dd947-zsps7\" (UID: \"debcc187-997b-4ff2-ae1b-0a187aba449f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.243073 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrdzk\" (UniqueName: \"kubernetes.io/projected/74828378-0762-464d-b1c5-bda879361119-kube-api-access-nrdzk\") pod \"openshift-config-operator-7777fb866f-bf7f4\" (UID: \"74828378-0762-464d-b1c5-bda879361119\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.252026 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.258156 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7j9h\" (UniqueName: \"kubernetes.io/projected/b36a4b12-b069-4dc4-a503-936aae20d06e-kube-api-access-v7j9h\") pod \"machine-api-operator-5694c8668f-f229v\" (UID: \"b36a4b12-b069-4dc4-a503-936aae20d06e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.271522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjxjp\" (UniqueName: \"kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp\") pod \"console-f9d7485db-6p7fc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " pod="openshift-console/console-f9d7485db-6p7fc"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.290733 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fknqp\" (UniqueName: \"kubernetes.io/projected/51349199-ef14-46c8-9511-c14d14305b77-kube-api-access-fknqp\") pod \"machine-approver-56656f9798-gnnhv\" (UID: \"51349199-ef14-46c8-9511-c14d14305b77\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.299983 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn"
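Each "Caches populated for *v1.Secret from object-..." line is one of those caches completing its initial list/watch. Note the scoping: kubelet does not watch whole namespaces here; it runs a reflector per referenced object. A sketch of that narrowing with a field selector (the function name and setup are assumed for illustration; the real code lives in kubelet's secret and configmap managers):

    // singleSecretInformer returns an informer that lists/watches exactly one
    // named Secret, mirroring kubelet's per-object cache entries.
    func singleSecretInformer(cs kubernetes.Interface, namespace, name string) cache.SharedIndexInformer {
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
            informers.WithNamespace(namespace),
            informers.WithTweakListOptions(func(o *metav1.ListOptions) {
                // Restrict the list/watch to a single object by name.
                o.FieldSelector = fields.OneTermEqualSelector("metadata.name", name).String()
            }))
        return factory.Core().V1().Secrets().Informer()
    }

(Imports as in the previous sketch, plus metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" and "k8s.io/apimachinery/pkg/fields".) For the first failure above this would be singleSecretInformer(cs, "openshift-etcd-operator", "etcd-client").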
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.300419 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.300556 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.300694 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.300820 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301047 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301192 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301410 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301560 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-service-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301656 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f667aadc-3176-4462-a4e1-38d6d8222d47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.301830 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.302057 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-ca\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.302359 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6p7fc"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.302483 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857eeec-a0e1-463e-a77d-a41da08f2b3e-config\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.302972 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-certs\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.303353 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f667aadc-3176-4462-a4e1-38d6d8222d47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.304120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-etcd-client\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.304740 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/07043640-1d98-4100-914f-ace6faae73d7-node-bootstrap-token\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.304836 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1857eeec-a0e1-463e-a77d-a41da08f2b3e-serving-cert\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.307953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6mwx\" (UniqueName: \"kubernetes.io/projected/f9418283-90eb-4525-977d-296f994539fd-kube-api-access-h6mwx\") pod \"apiserver-7bbb656c7d-mhtxz\" (UID: \"f9418283-90eb-4525-977d-296f994539fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.315675 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.328472 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shckd\" (UniqueName: \"kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd\") pod \"route-controller-manager-6576b87f9c-5tzvk\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.353421 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnk7v\" (UniqueName: \"kubernetes.io/projected/8d1df251-3903-4430-a03a-792c9a01051e-kube-api-access-mnk7v\") pod \"authentication-operator-69f744f599-b2stl\" (UID: \"8d1df251-3903-4430-a03a-792c9a01051e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.373979 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjb2\" (UniqueName: \"kubernetes.io/projected/6dd906bf-c431-4243-96ef-236ed368bf11-kube-api-access-gvjb2\") pod \"console-operator-58897d9998-v6sdf\" (UID: \"6dd906bf-c431-4243-96ef-236ed368bf11\") " pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.391981 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pw4z\" (UniqueName: \"kubernetes.io/projected/fec5d988-b5a3-4aa6-90a5-d62e31b8276b-kube-api-access-6pw4z\") pod \"openshift-apiserver-operator-796bbdcf4f-jqbl7\" (UID: \"fec5d988-b5a3-4aa6-90a5-d62e31b8276b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.412730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb7q9\" (UniqueName: \"kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9\") pod \"controller-manager-879f6c89f-hg84w\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.435660 4957 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.454068 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.454705 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.461383 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.469151 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.474772 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.476152 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.489038 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.495378 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.499112 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.515173 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.531361 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.534410 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.553172 4957 request.go:700] Waited for 1.946823567s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.554751 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.575422 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.595225 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.614938 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.625318 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.652239 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.669639 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qtb6\" (UniqueName: \"kubernetes.io/projected/6dc37571-86e4-4d8c-bc0f-97c53da56e4f-kube-api-access-5qtb6\") pod \"cluster-image-registry-operator-dc59b4c8b-n7xjd\" (UID: \"6dc37571-86e4-4d8c-bc0f-97c53da56e4f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.689548 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58ddecc7-b52f-4879-b4cb-af8fb7069448-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r26lm\" (UID: \"58ddecc7-b52f-4879-b4cb-af8fb7069448\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.705809 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.710594 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a145189a-74bb-4100-8ba3-52aa988e0163-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qgvqz\" (UID: \"a145189a-74bb-4100-8ba3-52aa988e0163\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.710881 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.717399 4957 util.go:30] "No sandbox for pod can be found. 
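One entry in this block is worth flagging: request.go:700 "Waited for 1.946823567s due to client-side throttling, not priority and fairness". That delay is imposed by client-go's own token-bucket limiter, not by the API server; with this many reflectors starting at once, the kubelet briefly exhausts its burst budget for LIST calls. The knobs live on rest.Config (values below are illustrative; kubelet surfaces them as its kubeAPIQPS/kubeAPIBurst configuration):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        // Client-side token bucket: QPS is the sustained request rate, Burst the
        // bucket size. Requests beyond the bucket block and emit the
        // "Waited for ... due to client-side throttling" line seen above.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }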
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.730724 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj5gl\" (UniqueName: \"kubernetes.io/projected/80f38516-fdd0-42ed-855f-7f4f01a98786-kube-api-access-kj5gl\") pod \"downloads-7954f5f757-xs7kj\" (UID: \"80f38516-fdd0-42ed-855f-7f4f01a98786\") " pod="openshift-console/downloads-7954f5f757-xs7kj"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.753083 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59vvx\" (UniqueName: \"kubernetes.io/projected/27a7baa1-a66c-4c13-be52-2a401578c92d-kube-api-access-59vvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-pj8vl\" (UID: \"27a7baa1-a66c-4c13-be52-2a401578c92d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.768739 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dqq9\" (UniqueName: \"kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9\") pod \"oauth-openshift-558db77b4-9529v\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " pod="openshift-authentication/oauth-openshift-558db77b4-9529v"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.841778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8mz\" (UniqueName: \"kubernetes.io/projected/ea21530c-53e8-469d-bd38-997357f9b970-kube-api-access-hw8mz\") pod \"dns-operator-744455d44c-khfrs\" (UID: \"ea21530c-53e8-469d-bd38-997357f9b970\") " pod="openshift-dns-operator/dns-operator-744455d44c-khfrs"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.853256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtjb\" (UniqueName: \"kubernetes.io/projected/70eb3caa-0533-4620-9dc5-4e5b9c4581bc-kube-api-access-kjtjb\") pod \"apiserver-76f77b778f-8bq66\" (UID: \"70eb3caa-0533-4620-9dc5-4e5b9c4581bc\") " pod="openshift-apiserver/apiserver-76f77b778f-8bq66"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.857514 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5pqv\" (UniqueName: \"kubernetes.io/projected/41dcccb8-0f23-4caa-8a48-447e043571de-kube-api-access-g5pqv\") pod \"service-ca-operator-777779d784-qdjg7\" (UID: \"41dcccb8-0f23-4caa-8a48-447e043571de\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.872422 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2frh\" (UniqueName: \"kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh\") pod \"collect-profiles-29406045-99lc6\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.890039 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znlxn\" (UniqueName: \"kubernetes.io/projected/b4c4d1b6-8ab9-48e0-b59d-eb863f02887e-kube-api-access-znlxn\") pod \"machine-config-controller-84d6567774-9rnxv\" (UID: \"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.918666 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7ph\" (UniqueName: \"kubernetes.io/projected/b8ff6724-d919-4bbf-87c6-3b521739d1a2-kube-api-access-vf7ph\") pod \"service-ca-9c57cc56f-66ztx\" (UID: \"b8ff6724-d919-4bbf-87c6-3b521739d1a2\") " pod="openshift-service-ca/service-ca-9c57cc56f-66ztx"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.927093 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8123ea-935f-4537-a8ca-83107de89a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7fxs8\" (UID: \"cd8123ea-935f-4537-a8ca-83107de89a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.945060 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9529v"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.947433 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpsz\" (UniqueName: \"kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz\") pod \"marketplace-operator-79b997595-bm8t5\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.951219 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xs7kj"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.964011 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8bq66"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.967576 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9f6c\" (UniqueName: \"kubernetes.io/projected/345ee1b6-acbf-424c-bb00-f7545f4393ad-kube-api-access-m9f6c\") pod \"packageserver-d55dfcdfc-p884p\" (UID: \"345ee1b6-acbf-424c-bb00-f7545f4393ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p"
Nov 28 20:51:44 crc kubenswrapper[4957]: I1128 20:51:44.992072 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrgm\" (UniqueName: \"kubernetes.io/projected/07043640-1d98-4100-914f-ace6faae73d7-kube-api-access-jtrgm\") pod \"machine-config-server-6jcwn\" (UID: \"07043640-1d98-4100-914f-ace6faae73d7\") " pod="openshift-machine-config-operator/machine-config-server-6jcwn"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.006919 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z7rh\" (UniqueName: \"kubernetes.io/projected/6b39d147-f628-444c-9333-37b05318296e-kube-api-access-4z7rh\") pod \"multus-admission-controller-857f4d67dd-g8h7t\" (UID: \"6b39d147-f628-444c-9333-37b05318296e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.022892 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.030157 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.043588 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.046463 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.047992 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qzq\" (UniqueName: \"kubernetes.io/projected/d4582bfc-0ce9-4859-a91e-ef9b41b775e4-kube-api-access-q9qzq\") pod \"catalog-operator-68c6474976-dqf9w\" (UID: \"d4582bfc-0ce9-4859-a91e-ef9b41b775e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.050375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjvf\" (UniqueName: \"kubernetes.io/projected/54dfe258-d553-48ff-b47d-dead72eb7646-kube-api-access-dgjvf\") pod \"machine-config-operator-74547568cd-f5946\" (UID: \"54dfe258-d553-48ff-b47d-dead72eb7646\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.064822 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.067693 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4r8r\" (UniqueName: \"kubernetes.io/projected/67aafc66-e89d-468e-b26c-c6cd8c842020-kube-api-access-t4r8r\") pod \"router-default-5444994796-t54js\" (UID: \"67aafc66-e89d-468e-b26c-c6cd8c842020\") " pod="openshift-ingress/router-default-5444994796-t54js"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.088706 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrgvd\" (UniqueName: \"kubernetes.io/projected/2ff64fb3-2997-4711-97af-97a674dd4424-kube-api-access-mrgvd\") pod \"package-server-manager-789f6589d5-8lgvk\" (UID: \"2ff64fb3-2997-4711-97af-97a674dd4424\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.102528 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.107367 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-66ztx"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.108455 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfl5m\" (UniqueName: \"kubernetes.io/projected/1857eeec-a0e1-463e-a77d-a41da08f2b3e-kube-api-access-pfl5m\") pod \"etcd-operator-b45778765-bkd2s\" (UID: \"1857eeec-a0e1-463e-a77d-a41da08f2b3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.114259 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7"
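The recurring util.go:30 "No sandbox for pod can be found. Need to start a new one" is not an error: each of these pods is being started fresh after the reboot, so kubelet's sync loop finds no live sandbox and asks the container runtime (CRI-O on this host) to create one over the CRI gRPC API. Reduced to its core, that call looks roughly like the sketch below (socket path and pod metadata are illustrative; kubelet fills in far more of PodSandboxConfig):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; talking to it requires root on the node.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // The step behind "Need to start a new one": create a fresh sandbox
        // (pause container plus network namespace) for the pod.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "example",                              // illustrative
                    Namespace: "default",                              // illustrative
                    Uid:       "00000000-0000-0000-0000-000000000000", // illustrative
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox:", resp.PodSandboxId)
    }

The "ContainerStarted" PLEG events further down are the visible result of these sandbox creations.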
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.120734 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.126995 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsfr\" (UniqueName: \"kubernetes.io/projected/f667aadc-3176-4462-a4e1-38d6d8222d47-kube-api-access-szsfr\") pod \"openshift-controller-manager-operator-756b6f6bc6-6pfc5\" (UID: \"f667aadc-3176-4462-a4e1-38d6d8222d47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.134943 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.140618 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.146655 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tfpv\" (UniqueName: \"kubernetes.io/projected/f1ea6588-c958-4c94-8c43-d4576d12c1d0-kube-api-access-9tfpv\") pod \"olm-operator-6b444d44fb-pcn5k\" (UID: \"f1ea6588-c958-4c94-8c43-d4576d12c1d0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.146935 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.153794 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.160811 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.167846 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svmj\" (UniqueName: \"kubernetes.io/projected/9d80547a-482d-49a2-9363-616e21af8403-kube-api-access-8svmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jpxxk\" (UID: \"9d80547a-482d-49a2-9363-616e21af8403\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.273808 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.274082 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6jcwn"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.275398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.275451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.275545 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.275713 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.276807 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"
Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.277285 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:45.77726955 +0000 UTC m=+145.245917459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.351663 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t54js"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.359111 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.376956 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.377044 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:45.877028258 +0000 UTC m=+145.345676157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/177bd2d3-3f98-43d3-93ff-5788659ad6da-trusted-ca\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377323 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzgbs\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-kube-api-access-qzgbs\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377349 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377420 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqnb\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
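The two CSI failures above, MountVolume.MountDevice for image-registry-697d97f7c8-jx2ts and UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b, share one cause: the kubevirt.io.hostpath-provisioner driver has not yet re-registered with kubelet after the restart, so neither attach nor teardown can reach it. Registration happens through the socket the driver publishes under kubelet's plugin registration directory once the csi-hostpathplugin-r5xmw pod (being assembled in the surrounding lines) is running, and it is reflected in the node's CSINode object. A sketch of checking that state (client setup as in the earlier sketches; node name "crc" is taken from this log):

    // List the CSI drivers kubelet has registered for this node. The mounts
    // above keep failing until kubevirt.io.hostpath-provisioner shows up here.
    csiNode, err := cs.StorageV1().CSINodes().Get(ctx, "crc", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, d := range csiNode.Spec.Drivers {
        fmt.Println("registered CSI driver:", d.Name)
    }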
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377458 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2rb\" (UniqueName: \"kubernetes.io/projected/1b58440f-a51e-4d22-beaa-f9a5fc5a69c6-kube-api-access-4l2rb\") pod \"migrator-59844c95c7-tgnjn\" (UID: \"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377516 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377532 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377550 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.377616 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177bd2d3-3f98-43d3-93ff-5788659ad6da-metrics-tls\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"
Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.378637 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:45.878626173 +0000 UTC m=+145.347274082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.379050 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.380762 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.390835 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.478244 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480457 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-registration-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/251de81b-bec9-441f-a17f-77269fbfb233-metrics-tls\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480623 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-csi-data-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw"
Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.480722 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:45.98069573 +0000 UTC m=+145.449343719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480801 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251de81b-bec9-441f-a17f-77269fbfb233-config-volume\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480916 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsds\" (UniqueName: \"kubernetes.io/projected/a29f65c5-800f-49f0-91e5-608c99574879-kube-api-access-jbsds\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.480977 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-mountpoint-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481040 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjqnb\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481171 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l2rb\" (UniqueName: \"kubernetes.io/projected/1b58440f-a51e-4d22-beaa-f9a5fc5a69c6-kube-api-access-4l2rb\") pod \"migrator-59844c95c7-tgnjn\" (UID: \"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn"
Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481197 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhmh\" (UniqueName: \"kubernetes.io/projected/b18c2b05-60d9-4f6b-ae61-2706b4cec752-kube-api-access-lxhmh\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb"
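Almost every volume named kube-api-access-* in this stretch is the projected service-account volume the admission layer adds to each pod: a bound token, the cluster CA bundle, and the pod's namespace, mounted read-only. Its shape in corev1 terms (a sketch; 3607 seconds is the conventional default token expiry, and the volume name is the one just mounted for ingress-canary-h4fjb):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/utils/ptr"
    )

    func main() {
        vol := corev1.Volume{
            Name: "kube-api-access-lxhmh",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // Bound, auto-rotated service-account token.
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: ptr.To[int64](3607), // conventional default
                        }},
                        // Cluster CA bundle for verifying the API server.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        // The pod's own namespace via the downward API.
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        _ = vol
    }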
\"kubernetes.io/projected/b18c2b05-60d9-4f6b-ae61-2706b4cec752-kube-api-access-lxhmh\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481393 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481426 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481452 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481634 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnlj\" (UniqueName: \"kubernetes.io/projected/251de81b-bec9-441f-a17f-77269fbfb233-kube-api-access-dwnlj\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481684 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177bd2d3-3f98-43d3-93ff-5788659ad6da-metrics-tls\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481720 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-socket-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481909 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-plugins-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 
20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.481936 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b18c2b05-60d9-4f6b-ae61-2706b4cec752-cert\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.482005 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/177bd2d3-3f98-43d3-93ff-5788659ad6da-trusted-ca\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.482090 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzgbs\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-kube-api-access-qzgbs\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.484418 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.485364 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:45.985351581 +0000 UTC m=+145.453999600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.488306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/177bd2d3-3f98-43d3-93ff-5788659ad6da-trusted-ca\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.490619 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.501261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/177bd2d3-3f98-43d3-93ff-5788659ad6da-metrics-tls\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.509099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzgbs\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-kube-api-access-qzgbs\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.534122 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjqnb\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.537008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6jcwn" event={"ID":"07043640-1d98-4100-914f-ace6faae73d7","Type":"ContainerStarted","Data":"7b62b53cc3bf929368ce1e69876172d8ff4b1a1df5afebcc0f78f00fc5202790"} Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.538938 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" event={"ID":"51349199-ef14-46c8-9511-c14d14305b77","Type":"ContainerStarted","Data":"a7075090a61f6668bb1d965de3be7b9f29fcb85554f61b946c65189139de7da7"} Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.558729 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l2rb\" (UniqueName: \"kubernetes.io/projected/1b58440f-a51e-4d22-beaa-f9a5fc5a69c6-kube-api-access-4l2rb\") pod \"migrator-59844c95c7-tgnjn\" (UID: \"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.580715 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583257 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583476 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhmh\" (UniqueName: \"kubernetes.io/projected/b18c2b05-60d9-4f6b-ae61-2706b4cec752-kube-api-access-lxhmh\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583540 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwnlj\" (UniqueName: \"kubernetes.io/projected/251de81b-bec9-441f-a17f-77269fbfb233-kube-api-access-dwnlj\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-socket-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-plugins-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583603 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b18c2b05-60d9-4f6b-ae61-2706b4cec752-cert\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583645 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-registration-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/251de81b-bec9-441f-a17f-77269fbfb233-metrics-tls\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 
28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583688 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-csi-data-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583708 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251de81b-bec9-441f-a17f-77269fbfb233-config-volume\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583728 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsds\" (UniqueName: \"kubernetes.io/projected/a29f65c5-800f-49f0-91e5-608c99574879-kube-api-access-jbsds\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583746 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-mountpoint-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.583830 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-mountpoint-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.583887 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.083873146 +0000 UTC m=+145.552521055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.584196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-registration-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.584456 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-socket-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.584502 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-plugins-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.587599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b18c2b05-60d9-4f6b-ae61-2706b4cec752-cert\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.588174 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/251de81b-bec9-441f-a17f-77269fbfb233-config-volume\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.588276 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a29f65c5-800f-49f0-91e5-608c99574879-csi-data-dir\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.591697 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/251de81b-bec9-441f-a17f-77269fbfb233-metrics-tls\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.605184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/177bd2d3-3f98-43d3-93ff-5788659ad6da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-27cqh\" (UID: \"177bd2d3-3f98-43d3-93ff-5788659ad6da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.628823 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lxhmh\" (UniqueName: \"kubernetes.io/projected/b18c2b05-60d9-4f6b-ae61-2706b4cec752-kube-api-access-lxhmh\") pod \"ingress-canary-h4fjb\" (UID: \"b18c2b05-60d9-4f6b-ae61-2706b4cec752\") " pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.657159 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwnlj\" (UniqueName: \"kubernetes.io/projected/251de81b-bec9-441f-a17f-77269fbfb233-kube-api-access-dwnlj\") pod \"dns-default-lw4b4\" (UID: \"251de81b-bec9-441f-a17f-77269fbfb233\") " pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.670252 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsds\" (UniqueName: \"kubernetes.io/projected/a29f65c5-800f-49f0-91e5-608c99574879-kube-api-access-jbsds\") pod \"csi-hostpathplugin-r5xmw\" (UID: \"a29f65c5-800f-49f0-91e5-608c99574879\") " pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.689622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.689936 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.18992343 +0000 UTC m=+145.658571339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.728590 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.768750 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.783405 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f229v"] Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.790324 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.790557 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.290522556 +0000 UTC m=+145.759170475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.790600 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.790954 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.792926 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.292911609 +0000 UTC m=+145.761559588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.797741 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4"] Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.809394 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"] Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.809502 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.820805 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.824834 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h4fjb" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.831264 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.896047 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.896240 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.396198338 +0000 UTC m=+145.864846247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.896852 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.897347 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 20:51:46.397336597 +0000 UTC m=+145.865984586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:45 crc kubenswrapper[4957]: I1128 20:51:45.997478 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:45 crc kubenswrapper[4957]: E1128 20:51:45.997855 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.49780923 +0000 UTC m=+145.966457139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.070911 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v6sdf"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.074673 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.098583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.098945 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.598932044 +0000 UTC m=+146.067579963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.199833 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.199959 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.699940795 +0000 UTC m=+146.168588704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.200370 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.200673 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.70066295 +0000 UTC m=+146.169310859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.303585 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.803562756 +0000 UTC m=+146.272210665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.303622 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.304060 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.304353 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.804345273 +0000 UTC m=+146.272993182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.404945 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.405128 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.905106185 +0000 UTC m=+146.373754094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.405194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.405608 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:46.905599952 +0000 UTC m=+146.374247861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.506790 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.506975 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.006950564 +0000 UTC m=+146.475598473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.507204 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.507655 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.007636558 +0000 UTC m=+146.476284537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.550944 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" event={"ID":"b36a4b12-b069-4dc4-a503-936aae20d06e","Type":"ContainerStarted","Data":"bd3619d80cd3026e27f7db9b39ac8ab1e95e3689a3a5c2fb45bbd6a081ab1519"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.550982 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" event={"ID":"b36a4b12-b069-4dc4-a503-936aae20d06e","Type":"ContainerStarted","Data":"011fe2015de72fde5e6b70a548f86ac5fee7052a44ddfe102327079a17733205"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.552134 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" event={"ID":"6dd906bf-c431-4243-96ef-236ed368bf11","Type":"ContainerStarted","Data":"7f7c6633832c138894c81d1e4908f6f35a9361290ae670f2ff3c84f5c5cec52b"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.555810 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t54js" event={"ID":"67aafc66-e89d-468e-b26c-c6cd8c842020","Type":"ContainerStarted","Data":"de92840dce6c7af218cf3a32c121cd802dbaac64f353306de953b3155f3c9cc5"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.555853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t54js" event={"ID":"67aafc66-e89d-468e-b26c-c6cd8c842020","Type":"ContainerStarted","Data":"dbd8a26df2bda4e06aa8b670486df29c6ec1411d602c2582a430c2cbd6350b75"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.588618 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6p7fc" 
event={"ID":"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc","Type":"ContainerStarted","Data":"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.588668 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6p7fc" event={"ID":"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc","Type":"ContainerStarted","Data":"c5cb7ec476f11e7c7e3f9b42cc9555737a7f4f9a35337256230181743be3d9ee"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.607376 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6jcwn" event={"ID":"07043640-1d98-4100-914f-ace6faae73d7","Type":"ContainerStarted","Data":"c8949e7fb6f0986220c7dbece48a9ded6dde2de580156afdbf8d3cb93c47e240"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.608183 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.616385 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.116357815 +0000 UTC m=+146.585005724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.626503 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9418283-90eb-4525-977d-296f994539fd" containerID="987012f4b4d195640174d835f4a0e194af25c8d41f404ca964a17d8604aa095c" exitCode=0 Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.626578 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" event={"ID":"f9418283-90eb-4525-977d-296f994539fd","Type":"ContainerDied","Data":"987012f4b4d195640174d835f4a0e194af25c8d41f404ca964a17d8604aa095c"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.626604 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" event={"ID":"f9418283-90eb-4525-977d-296f994539fd","Type":"ContainerStarted","Data":"5d094474b9354a540f4ea141c745b2ff42680466394da34f8c82e2ea4f35e7db"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.632202 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" event={"ID":"eff2527a-b897-47e0-92ac-f9319119ee43","Type":"ContainerStarted","Data":"31f3f98422433f50ab9761a9984ba64712842e5fc8e13a032788508c1a53eb3d"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.633100 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 
20:51:46.636960 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" event={"ID":"51349199-ef14-46c8-9511-c14d14305b77","Type":"ContainerStarted","Data":"7d81d79ebfda010ce65a3776526492585be38beb292e9d9dc3b529fc2d256cf3"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.636996 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" event={"ID":"51349199-ef14-46c8-9511-c14d14305b77","Type":"ContainerStarted","Data":"2caf4578383bbd71c3b2385dc2d5d8bb0c3655416a6683b509676e6a28bbdefb"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.642823 4957 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hg84w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.642871 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.647728 4957 generic.go:334] "Generic (PLEG): container finished" podID="74828378-0762-464d-b1c5-bda879361119" containerID="83afe1a889d80aef69e4b8d48135990806ac1e9e1c4c576fdc4addf5ef478982" exitCode=0 Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.647782 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" event={"ID":"74828378-0762-464d-b1c5-bda879361119","Type":"ContainerDied","Data":"83afe1a889d80aef69e4b8d48135990806ac1e9e1c4c576fdc4addf5ef478982"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.647812 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" event={"ID":"74828378-0762-464d-b1c5-bda879361119","Type":"ContainerStarted","Data":"387d9fa63b2fba7ccef2c63285329e136d3b8a9da35e7f7e86ab6c8bc366b4d8"} Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.710669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.712614 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.212599841 +0000 UTC m=+146.681247750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.760160 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.762467 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.767799 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-khfrs"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.771731 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.776941 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f5946"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.785468 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8bq66"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.811731 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.811984 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.311971215 +0000 UTC m=+146.780619124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.834881 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7"] Nov 28 20:51:46 crc kubenswrapper[4957]: I1128 20:51:46.916686 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:46 crc kubenswrapper[4957]: E1128 20:51:46.917086 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.417073277 +0000 UTC m=+146.885721186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.007864 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6jcwn" podStartSLOduration=5.007845224 podStartE2EDuration="5.007845224s" podCreationTimestamp="2025-11-28 20:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:46.968110421 +0000 UTC m=+146.436758330" watchObservedRunningTime="2025-11-28 20:51:47.007845224 +0000 UTC m=+146.476493133" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.018884 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.019227 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.519199406 +0000 UTC m=+146.987847315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.049474 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6p7fc" podStartSLOduration=128.049456762 podStartE2EDuration="2m8.049456762s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.008685363 +0000 UTC m=+146.477333272" watchObservedRunningTime="2025-11-28 20:51:47.049456762 +0000 UTC m=+146.518104671" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.098982 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-t54js" podStartSLOduration=127.098966183 podStartE2EDuration="2m7.098966183s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.089804946 +0000 UTC m=+146.558452855" watchObservedRunningTime="2025-11-28 20:51:47.098966183 +0000 UTC m=+146.567614092" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.101157 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.104275 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.122900 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.123222 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.62319731 +0000 UTC m=+147.091845209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.148801 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xs7kj"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.154173 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.170005 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b2stl"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.210670 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.225442 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.225726 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.725712083 +0000 UTC m=+147.194359992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.237402 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.299011 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" podStartSLOduration=127.298996816 podStartE2EDuration="2m7.298996816s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.258685652 +0000 UTC m=+146.727333561" watchObservedRunningTime="2025-11-28 20:51:47.298996816 +0000 UTC m=+146.767644725" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.302794 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.310577 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.319179 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkd2s"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.324450 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gnnhv" podStartSLOduration=128.324436475 podStartE2EDuration="2m8.324436475s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.321081349 +0000 UTC m=+146.789729258" watchObservedRunningTime="2025-11-28 20:51:47.324436475 +0000 UTC m=+146.793084384" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.328283 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.328549 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.828539826 +0000 UTC m=+147.297187735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.348364 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g8h7t"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.352041 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.352102 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.354909 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.358066 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9529v"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.360468 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:47 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:47 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:47 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.360537 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:47 crc kubenswrapper[4957]: W1128 20:51:47.378006 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8123ea_935f_4537_a8ca_83107de89a7e.slice/crio-c59e3c22d22d1f8780609bdb1c2f1e6c3ca03b8de26572b65f6fea3536c454d4 WatchSource:0}: Error finding container c59e3c22d22d1f8780609bdb1c2f1e6c3ca03b8de26572b65f6fea3536c454d4: Status 404 returned error can't find the container with id c59e3c22d22d1f8780609bdb1c2f1e6c3ca03b8de26572b65f6fea3536c454d4 Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.404947 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.430575 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.431183 4957 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:47.931144992 +0000 UTC m=+147.399792901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.437838 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.493498 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66ztx"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.509925 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.511066 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h4fjb"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.512852 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lw4b4"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.514401 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.516054 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.534357 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.534679 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.0346654 +0000 UTC m=+147.503313309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.601408 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.623672 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.635960 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.636277 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.136257621 +0000 UTC m=+147.604905530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.636616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.636824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.636910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.637189 4957 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.137181412 +0000 UTC m=+147.605829321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.649972 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r5xmw"] Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.665890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.666394 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.707478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h4fjb" event={"ID":"b18c2b05-60d9-4f6b-ae61-2706b4cec752","Type":"ContainerStarted","Data":"0c1d470a7a630316f55261dcef146ae8692e03835ce8a7f63f9c822ce3dcf143"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.712635 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" event={"ID":"2ff64fb3-2997-4711-97af-97a674dd4424","Type":"ContainerStarted","Data":"b087ff6fae0b6e775b473cb4ad5b3cc0400e7a6ebb043fb593a7945e6b6e441c"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.712685 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" event={"ID":"2ff64fb3-2997-4711-97af-97a674dd4424","Type":"ContainerStarted","Data":"1fec589de4d76c49bc3c61e4ff60a5fcbef5e1a797cb27b80e2f0e7d594fc8a3"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.738481 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" event={"ID":"f667aadc-3176-4462-a4e1-38d6d8222d47","Type":"ContainerStarted","Data":"cf7f69f40f0bfc3ce9ea834473cc9d4cb247e42c69fbbdf57892e93574936deb"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.740421 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.740707 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.740845 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.240822994 +0000 UTC m=+147.709470903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.740891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.740932 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.741310 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.24129528 +0000 UTC m=+147.709943189 (durationBeforeRetry 500ms). 
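Note the SyncLoop UPDATE for hostpath-provisioner/csi-hostpathplugin-r5xmw a little above: the CSI node plugin that would serve this volume is itself still being synced, which is consistent with the driver not yet appearing in the kubelet's registry. Node plugins register by placing a socket in the kubelet's plugin-registration directory; listing that directory on the node shows who has registered so far. A sketch follows, where the path is the kubelet default and an assumption for this host.

    // List kubelet plugin-registration sockets on the node.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry") // assumed default path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            // e.g. a kubevirt.io.hostpath-provisioner registration socket
            // should appear here once the plugin's registrar is up.
            fmt.Println(e.Name())
        }
    }

Until an entry for kubevirt.io.hostpath-provisioner shows up there, every MountDevice and TearDownAt in this log will keep failing exactly as captured.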
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.752869 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.778553 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" event={"ID":"f84ca592-04c4-4edf-a398-0f879254007f","Type":"ContainerStarted","Data":"27b6f50fba8461a80a5af5d2bf417a4f9045a81b3e4306b1f9a2517240090210"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.807457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" event={"ID":"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e","Type":"ContainerStarted","Data":"ef179e18b761d4ed9166c69d5366739c61c6d88a9ad45e467a1bac1b18734979"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.832754 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xs7kj" event={"ID":"80f38516-fdd0-42ed-855f-7f4f01a98786","Type":"ContainerStarted","Data":"8d68637cd50fde641670c5dbf8ee56e31bf5126ecfb1943d3d15abb9f1aa350a"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.833133 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xs7kj" event={"ID":"80f38516-fdd0-42ed-855f-7f4f01a98786","Type":"ContainerStarted","Data":"24c6581331c778f29f4a2ebbc3a7a61c9bc826651c744d97aa2c1f2b1f7890ac"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.833155 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xs7kj" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.843454 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.843721 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.844273 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 20:51:48.344251388 +0000 UTC m=+147.812899347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.847294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.848493 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-xs7kj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.848543 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xs7kj" podUID="80f38516-fdd0-42ed-855f-7f4f01a98786" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.848750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" event={"ID":"d19f4f47-257a-4269-96f3-e8892c939e0b","Type":"ContainerStarted","Data":"3d582c1887e5e71964ccb75236335a14eb816a4bf0a390fc808e792ee2ccc17b"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.866167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" event={"ID":"ecb3c993-1aab-4223-9efb-363b35b45e24","Type":"ContainerStarted","Data":"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.866226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" event={"ID":"ecb3c993-1aab-4223-9efb-363b35b45e24","Type":"ContainerStarted","Data":"bf33a029d95bf0653ce41ff8bd9134cf30cf88ebd559b611654529bd2f29e759"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.866975 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.870332 4957 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bm8t5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.870376 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.880871 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" event={"ID":"ea21530c-53e8-469d-bd38-997357f9b970","Type":"ContainerStarted","Data":"47be0021b0fd3144def96237e92f44dc9da7c5128c3b3b0e3665f83bd437ed0f"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.880922 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" event={"ID":"ea21530c-53e8-469d-bd38-997357f9b970","Type":"ContainerStarted","Data":"a88324626037d948ecffd3bf23fa3d17d5c73aa238d460f3ff2480e76336bd24"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.915560 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" podStartSLOduration=127.915544422 podStartE2EDuration="2m7.915544422s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.914358571 +0000 UTC m=+147.383006480" watchObservedRunningTime="2025-11-28 20:51:47.915544422 +0000 UTC m=+147.384192331" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.915695 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xs7kj" podStartSLOduration=128.915689877 podStartE2EDuration="2m8.915689877s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:47.862138226 +0000 UTC m=+147.330786135" watchObservedRunningTime="2025-11-28 20:51:47.915689877 +0000 UTC m=+147.384337786" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.922359 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" event={"ID":"debcc187-997b-4ff2-ae1b-0a187aba449f","Type":"ContainerStarted","Data":"28106a5a848d5d8f3874cea91d1b6c0e2f2559dcb46e3f85a339018bace23b7b"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.922400 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" event={"ID":"debcc187-997b-4ff2-ae1b-0a187aba449f","Type":"ContainerStarted","Data":"1988bbf392d65742dc40808ac93e2aaca8aec96b5478f1ccad81541c417205ce"} Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.932692 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.944923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:47 crc kubenswrapper[4957]: E1128 20:51:47.945240 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 20:51:48.445226888 +0000 UTC m=+147.913874797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:47 crc kubenswrapper[4957]: I1128 20:51:47.971245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" event={"ID":"1857eeec-a0e1-463e-a77d-a41da08f2b3e","Type":"ContainerStarted","Data":"116ff07eff18c719773d392f071e6fd9ff2e99532ec74b936363145bc18658b5"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.007946 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" event={"ID":"345ee1b6-acbf-424c-bb00-f7545f4393ad","Type":"ContainerStarted","Data":"39cc0e67d1ac9356b11ba439b9a0dd077b455b15bbe85a2bfcf720bc6639e8e9"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.007992 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" event={"ID":"345ee1b6-acbf-424c-bb00-f7545f4393ad","Type":"ContainerStarted","Data":"21785d0da8acd6b1e3bce90b8819b7edddeeb0c1021c92f959208fff7f411e68"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.008991 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.030566 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" event={"ID":"f1ea6588-c958-4c94-8c43-d4576d12c1d0","Type":"ContainerStarted","Data":"320b8b0cc13094a770996280122e14859f27f2b8a87447fef5c044349442fd9a"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.031369 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.044605 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" event={"ID":"d4582bfc-0ce9-4859-a91e-ef9b41b775e4","Type":"ContainerStarted","Data":"1c0351c8f92f5c9ee6d7ea9486c3634b686efa7bcdfccad0d295da5f4a9199fa"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.045769 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.046854 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.546836319 +0000 UTC m=+148.015484238 (durationBeforeRetry 500ms). 
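Each failed attempt ends with "No retries permitted until ... (durationBeforeRetry 500ms)". In this capture the delay stays at the initial 500ms throughout; kubelet volume operations back off exponentially on repeated failures of the same operation, so a longer outage would show the delay growing. The sketch below illustrates that shape with assumed constants (500ms initial, capped near two minutes); the exact values are an assumption, not taken from this log.

    // Illustrate exponential retry pacing of the kind durationBeforeRetry implies.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 500 * time.Millisecond // matches the log's first delay
            maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
        )
        d := initialDelay
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }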
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.058537 4957 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pcn5k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.058585 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" podUID="f1ea6588-c958-4c94-8c43-d4576d12c1d0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.068369 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.078787 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" podStartSLOduration=128.078770453 podStartE2EDuration="2m8.078770453s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.078470512 +0000 UTC m=+147.547118421" watchObservedRunningTime="2025-11-28 20:51:48.078770453 +0000 UTC m=+147.547418362" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.079082 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" podStartSLOduration=128.079077583 podStartE2EDuration="2m8.079077583s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.044319802 +0000 UTC m=+147.512967701" watchObservedRunningTime="2025-11-28 20:51:48.079077583 +0000 UTC m=+147.547725492" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.080415 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" event={"ID":"27a7baa1-a66c-4c13-be52-2a401578c92d","Type":"ContainerStarted","Data":"1bba8d0aa207ed8944a272fc10ef2a272da69ad75916e043e0428c56cc393bfd"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.118321 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" event={"ID":"6dc37571-86e4-4d8c-bc0f-97c53da56e4f","Type":"ContainerStarted","Data":"7a3029059e0df53d94bcae468f54c6be935a8981316d8890fdfd2a70ccde0975"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.149639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.149704 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.649688984 +0000 UTC m=+148.118336893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.154830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" event={"ID":"74828378-0762-464d-b1c5-bda879361119","Type":"ContainerStarted","Data":"0cc91b837217330fc89172c16ab4758e41d945dabb61d4f9da7ed1ad96aca8f0"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.155975 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.159783 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" podStartSLOduration=129.159757511 podStartE2EDuration="2m9.159757511s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.149549319 +0000 UTC m=+147.618197228" watchObservedRunningTime="2025-11-28 20:51:48.159757511 +0000 UTC m=+147.628405420" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.203704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" event={"ID":"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6","Type":"ContainerStarted","Data":"b16961d9aa118c4b4b80d74bb8d6042db33250d6e44af33440c9870ff037fbd1"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.246284 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" event={"ID":"f9418283-90eb-4525-977d-296f994539fd","Type":"ContainerStarted","Data":"5c54ae947832886f2f3bcdd452f830b9d5b62c75030d7b8b2e4436aa328c4a8f"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.261731 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.262557 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.762530723 +0000 UTC m=+148.231178632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.272237 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" event={"ID":"fec5d988-b5a3-4aa6-90a5-d62e31b8276b","Type":"ContainerStarted","Data":"efa74f72aaca708f1313ccb92a41148ada447df9214a7652f6f43d6270ba358b"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.282412 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" podStartSLOduration=129.282393129 podStartE2EDuration="2m9.282393129s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.189088845 +0000 UTC m=+147.657736744" watchObservedRunningTime="2025-11-28 20:51:48.282393129 +0000 UTC m=+147.751041028" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.311739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" event={"ID":"eff2527a-b897-47e0-92ac-f9319119ee43","Type":"ContainerStarted","Data":"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.332730 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.386039 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:48 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:48 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:48 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.386120 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.390308 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" podStartSLOduration=128.390287238 podStartE2EDuration="2m8.390287238s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.283534559 +0000 UTC m=+147.752182468" watchObservedRunningTime="2025-11-28 20:51:48.390287238 +0000 UTC m=+147.858935147" Nov 28 
20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.395650 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.402074 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:48.902054845 +0000 UTC m=+148.370702754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.421654 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" event={"ID":"b36a4b12-b069-4dc4-a503-936aae20d06e","Type":"ContainerStarted","Data":"d955e0523a34368c090e8ce190d15fe5635be90c7bce922603a6346aa7e2c4b4"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.455703 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p884p" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.499949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" event={"ID":"9d80547a-482d-49a2-9363-616e21af8403","Type":"ContainerStarted","Data":"3eff26474cd74476450a847e689050138e13ae66019d8d675815a8113f7ee8a8"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.500772 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.502396 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.002373711 +0000 UTC m=+148.471021620 (durationBeforeRetry 500ms). 
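The pod_startup_latency_tracker entries scattered through this stretch report firstStartedPulling and lastFinishedPulling as "0001-01-01 00:00:00 +0000 UTC". That is Go's zero time.Time, which the tracker prints when no image pull was observed, i.e. the images were already present on the node; podStartSLOduration then spans pod creation to observed running. A one-liner confirms the formatting:

    // Go's zero time.Time prints exactly as the pull timestamps above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var t time.Time // zero value
        fmt.Println(t)          // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(t.IsZero()) // true
    }

The ~2m7s SLO durations here are therefore dominated by the cluster coming up after kubelet start, not by image pulls.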
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.511480 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-f229v" podStartSLOduration=128.511463196 podStartE2EDuration="2m8.511463196s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.497376379 +0000 UTC m=+147.966024298" watchObservedRunningTime="2025-11-28 20:51:48.511463196 +0000 UTC m=+147.980111105" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.523072 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" event={"ID":"6dd906bf-c431-4243-96ef-236ed368bf11","Type":"ContainerStarted","Data":"6146a60245e5385058f2682e2dd48a9fe33a42f6b1ccfff52a35c4d03087dd88"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.523931 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.547768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" event={"ID":"8d1df251-3903-4430-a03a-792c9a01051e","Type":"ContainerStarted","Data":"18a0396f45d8c9a6de113cb3a330b9ff46311406665328d5637d90865058b0b3"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.547816 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" event={"ID":"8d1df251-3903-4430-a03a-792c9a01051e","Type":"ContainerStarted","Data":"0e1dd89fdaae538ab48d6899eb5c351c07df9eb0f365285eddbb24d37f306832"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.562736 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lw4b4" event={"ID":"251de81b-bec9-441f-a17f-77269fbfb233","Type":"ContainerStarted","Data":"598cb7886e410ccb19cdb9678089b43172f706518313eb1bfcf937543ac5e323"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.563097 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.574545 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" event={"ID":"a145189a-74bb-4100-8ba3-52aa988e0163","Type":"ContainerStarted","Data":"a29c68363e5b3cea1312e62d3fb141b44b7f18223987cde146ef1d1577ae6136"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.580635 4957 generic.go:334] "Generic (PLEG): container finished" podID="70eb3caa-0533-4620-9dc5-4e5b9c4581bc" containerID="85d442128efd5533040e8274b8bfdc69d210c7d9554e003fe39c34fd5a31b2b2" exitCode=0 Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.583342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-8bq66" event={"ID":"70eb3caa-0533-4620-9dc5-4e5b9c4581bc","Type":"ContainerDied","Data":"85d442128efd5533040e8274b8bfdc69d210c7d9554e003fe39c34fd5a31b2b2"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.583376 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" event={"ID":"70eb3caa-0533-4620-9dc5-4e5b9c4581bc","Type":"ContainerStarted","Data":"db45b5a5f464377265e68d834d90fea9006f600ddf203ad97bcbf554d5b8531d"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.602942 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.604129 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.104114197 +0000 UTC m=+148.572762106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.611369 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" event={"ID":"6b39d147-f628-444c-9333-37b05318296e","Type":"ContainerStarted","Data":"92504f78a0c17de5d72060a0c92df2fd463f595c96e105d6eee5419dfe03b774"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.628374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" event={"ID":"b8ff6724-d919-4bbf-87c6-3b521739d1a2","Type":"ContainerStarted","Data":"32e708975c53e6511e6f5363818a6a86c3f5dfb05846929b37895500ee55e239"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.657696 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" event={"ID":"41dcccb8-0f23-4caa-8a48-447e043571de","Type":"ContainerStarted","Data":"2a032022c469073bd3cc29b8dc48917135765acde4a8437ecb73e8fbe6982bc3"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.657737 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" event={"ID":"41dcccb8-0f23-4caa-8a48-447e043571de","Type":"ContainerStarted","Data":"836814c6eddc7748971f4c66e993165be0c4b678676edf26d6ec8d340a8d5b4a"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.668069 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b2stl" podStartSLOduration=129.668052357 podStartE2EDuration="2m9.668052357s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.600119589 +0000 UTC m=+148.068767498" watchObservedRunningTime="2025-11-28 20:51:48.668052357 +0000 UTC m=+148.136700266" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.668441 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-v6sdf" podStartSLOduration=129.66843611 podStartE2EDuration="2m9.66843611s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.661693337 +0000 UTC m=+148.130341246" watchObservedRunningTime="2025-11-28 20:51:48.66843611 +0000 UTC m=+148.137084019" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.684347 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" event={"ID":"8a1acf6a-47e9-482f-88e7-87d508ec3b4b","Type":"ContainerStarted","Data":"08e3af4c0bee3d849c72e61f45134ec42ebb0f73d360a1bbaf0daf1515f8c263"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.703787 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.703930 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.203908196 +0000 UTC m=+148.672556105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.704059 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.706282 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.206272588 +0000 UTC m=+148.674920497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.728365 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" event={"ID":"54dfe258-d553-48ff-b47d-dead72eb7646","Type":"ContainerStarted","Data":"2a0bd21927ec144b6ebaaced8797b42f14b11b13efdf7c30aa9a79e609286357"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.728636 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" event={"ID":"54dfe258-d553-48ff-b47d-dead72eb7646","Type":"ContainerStarted","Data":"f5be4c35b7d0984c3aa45e08d39ec7afd23cf482aa3994fa934b0b5b8da30409"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.728646 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" event={"ID":"54dfe258-d553-48ff-b47d-dead72eb7646","Type":"ContainerStarted","Data":"19280b5a943820e62a0cc2aae40d233ef7751274eb0ea942ec32e33f998913e0"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.750475 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdjg7" podStartSLOduration=128.750456435 podStartE2EDuration="2m8.750456435s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.703464801 +0000 UTC m=+148.172112710" watchObservedRunningTime="2025-11-28 20:51:48.750456435 +0000 UTC m=+148.219104344" Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.758012 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" event={"ID":"cd8123ea-935f-4537-a8ca-83107de89a7e","Type":"ContainerStarted","Data":"c59e3c22d22d1f8780609bdb1c2f1e6c3ca03b8de26572b65f6fea3536c454d4"} Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.808521 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.809653 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.30963944 +0000 UTC m=+148.778287349 (durationBeforeRetry 500ms). 
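The "SyncLoop (PLEG)" lines threaded through this section are pod lifecycle events relayed from the container runtime into the kubelet's sync loop; each carries the pod UID, an event type such as ContainerStarted or ContainerDied, and a container (or sandbox) ID. The event payload prints as JSON, so a saved journal can be mined with a tiny decoder. The struct below is inferred from the log text, not taken from kubelet source, and the sample payload is the apiserver-76f77b778f-8bq66 ContainerDied event captured above.

    // Decode a PLEG event payload as it appears in the journal.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type plegEvent struct {
        ID   string // pod UID
        Type string // ContainerStarted, ContainerDied, ...
        Data string // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"70eb3caa-0533-4620-9dc5-4e5b9c4581bc","Type":"ContainerDied","Data":"85d442128efd5533040e8274b8bfdc69d210c7d9554e003fe39c34fd5a31b2b2"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("%s %s %s\n", ev.Type, ev.ID, ev.Data)
    }

That particular ContainerDied with exitCode=0 is a container finishing cleanly during apiserver startup, immediately followed by a ContainerStarted for the same pod.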
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:48 crc kubenswrapper[4957]: I1128 20:51:48.914117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:48 crc kubenswrapper[4957]: E1128 20:51:48.914610 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.414598637 +0000 UTC m=+148.883246546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.014832 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.015141 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.515125451 +0000 UTC m=+148.983773360 (durationBeforeRetry 500ms). 
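The UnmountVolume and MountVolume failures above share one root cause: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers. Kubelet discovers CSI node plugins through registration sockets that drivers place under /var/lib/kubelet/plugins_registry/; until the hostpath provisioner's socket appears there, every TearDown and MountDevice attempt fails before ever reaching the driver. A minimal Go sketch of what "the list of registered CSI drivers" is built from (illustrative only, not kubelet code; it assumes the default kubelet path):

    // csi_registry_scan.go - lists the plugin registration sockets that
    // kubelet's plugin watcher would discover. A driver absent from this
    // directory produces exactly the "not found in the list of registered
    // CSI drivers" error seen in the log above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/var/lib/kubelet/plugins_registry" // default kubelet path (assumption)
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
            os.Exit(1)
        }
        for _, e := range entries {
            // CSI node plugins register via unix sockets in this directory.
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }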
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.115952 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.116684 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.61667264 +0000 UTC m=+149.085320549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.221853 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.222396 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.722379572 +0000 UTC m=+149.191027481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.316305 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.316668 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.327095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.327396 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.827385141 +0000 UTC m=+149.296033050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.354952 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 20:51:49 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld
Nov 28 20:51:49 crc kubenswrapper[4957]: [+]process-running ok
Nov 28 20:51:49 crc kubenswrapper[4957]: healthz check failed
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.355239 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.357405 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.403454 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f5946" podStartSLOduration=129.403438169 podStartE2EDuration="2m9.403438169s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:48.78648646 +0000 UTC m=+148.255134369" watchObservedRunningTime="2025-11-28 20:51:49.403438169 +0000 UTC m=+148.872086078"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.430799 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.431037 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:49.931003882 +0000 UTC m=+149.399651791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.534382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.535066 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.035051898 +0000 UTC m=+149.503699817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.635779 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.636176 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.136158202 +0000 UTC m=+149.604806111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: W1128 20:51:49.719098 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-43aadfabbd22200bf29a200bb30d5b3f0cc364a9d2a5a7926b4abe5910ebec30 WatchSource:0}: Error finding container 43aadfabbd22200bf29a200bb30d5b3f0cc364a9d2a5a7926b4abe5910ebec30: Status 404 returned error can't find the container with id 43aadfabbd22200bf29a200bb30d5b3f0cc364a9d2a5a7926b4abe5910ebec30
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.740091 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.740613 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.240595481 +0000 UTC m=+149.709243400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.841434 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.841754 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.341735856 +0000 UTC m=+149.810383765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.877484 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lw4b4" event={"ID":"251de81b-bec9-441f-a17f-77269fbfb233","Type":"ContainerStarted","Data":"0ae6861d24a7f503ce1fa7575b3f78e1747e2e1a0d7df2e8e53b4c14127e576d"}
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.929512 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" event={"ID":"debcc187-997b-4ff2-ae1b-0a187aba449f","Type":"ContainerStarted","Data":"7ce6abf143d622825a4c726214a9ddadbd8c34dde4f7d881721c16e39ec6c076"}
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.945325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts"
Nov 28 20:51:49 crc kubenswrapper[4957]: E1128 20:51:49.945666 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.445653097 +0000 UTC m=+149.914300996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.951830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" event={"ID":"fec5d988-b5a3-4aa6-90a5-d62e31b8276b","Type":"ContainerStarted","Data":"a294f2b19d2be18acd6830a9e2af8747e23dc970fa48ed6e407414bb2d7296e9"}
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.971157 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zsps7" podStartSLOduration=130.971137088 podStartE2EDuration="2m10.971137088s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:49.966042952 +0000 UTC m=+149.434690861" watchObservedRunningTime="2025-11-28 20:51:49.971137088 +0000 UTC m=+149.439784997"
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.984504 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" event={"ID":"8a1acf6a-47e9-482f-88e7-87d508ec3b4b","Type":"ContainerStarted","Data":"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe"}
Nov 28 20:51:49 crc kubenswrapper[4957]: I1128 20:51:49.986943 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9529v"
Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.002556 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jqbl7" podStartSLOduration=131.002538563 podStartE2EDuration="2m11.002538563s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.000241133 +0000 UTC m=+149.468889062" watchObservedRunningTime="2025-11-28 20:51:50.002538563 +0000 UTC m=+149.471186472"
Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.007273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fc558b5319d1c208a6ca5534e52868b21fab1071ab36b5342bdfa95a3859337c"}
Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.043165 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" event={"ID":"f1ea6588-c958-4c94-8c43-d4576d12c1d0","Type":"ContainerStarted","Data":"38b3531457da18c9b72fd7bdcb4347c6978035538ca2cbae906dac5a3b648b34"}
Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.046682 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
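Each failure in this stretch is parked by the kubelet's pending-operations tracker: the operation is marked failed, a retry deadline is computed ("No retries permitted until ..."), and the reconciler may not re-attempt that volume before the deadline, here 500ms away on every attempt. A toy Go sketch of this retry-after-deadline pattern (an illustration under stated assumptions, not the actual nestedpendingoperations.go logic):

    // retry_backoff.go - toy model of the loop visible in the log: a failed
    // volume operation sets a deadline; no retry is permitted before it.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountDevice stands in for MountVolume.MountDevice; it fails the same
    // way the log does while the CSI driver is still unregistered.
    func mountDevice() error {
        return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    }

    func main() {
        durationBeforeRetry := 500 * time.Millisecond // value seen in the log
        deadline := time.Now()
        for attempt := 1; attempt <= 5; attempt++ {
            if wait := time.Until(deadline); wait > 0 {
                time.Sleep(wait) // "No retries permitted until ..."
            }
            if err := mountDevice(); err != nil {
                deadline = time.Now().Add(durationBeforeRetry)
                fmt.Printf("attempt %d failed: %v (durationBeforeRetry %v)\n", attempt, err, durationBeforeRetry)
                continue
            }
            fmt.Println("mount succeeded")
            return
        }
    }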
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.047925 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.547909381 +0000 UTC m=+150.016557290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.053320 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" podStartSLOduration=131.053296407 podStartE2EDuration="2m11.053296407s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.050742279 +0000 UTC m=+149.519390188" watchObservedRunningTime="2025-11-28 20:51:50.053296407 +0000 UTC m=+149.521944316" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.096510 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" event={"ID":"d4582bfc-0ce9-4859-a91e-ef9b41b775e4","Type":"ContainerStarted","Data":"f5db35ad4d786950e2e12652fef4bc8ab296eb623c211f2d31e6a378057c42d7"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.097718 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.097955 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcn5k" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.101392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" event={"ID":"a145189a-74bb-4100-8ba3-52aa988e0163","Type":"ContainerStarted","Data":"41ffb17276ac5db39378c6b74a0c3190a79043ebe2684e82c62f42c5d3cc20df"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.120385 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" event={"ID":"a29f65c5-800f-49f0-91e5-608c99574879","Type":"ContainerStarted","Data":"d9bfec718af51e25994f37b0b238b2dca98dfda135db1d66fc55907c19c9abb8"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.132298 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" event={"ID":"9d80547a-482d-49a2-9363-616e21af8403","Type":"ContainerStarted","Data":"92295839bdd86a7d7d7fcec8cbfc23b24b21ab93aef8df2fc87b8bb50d347cb7"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.144760 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.150468 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" event={"ID":"58ddecc7-b52f-4879-b4cb-af8fb7069448","Type":"ContainerStarted","Data":"b6d6ebd3d8ab8278f55ddf578a34c33b00745f691329b0903fb447b51409c13c"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.150519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" event={"ID":"58ddecc7-b52f-4879-b4cb-af8fb7069448","Type":"ContainerStarted","Data":"5b76273140fd7f90815a1deada0def8cca145f867b2740c9d887cf02aa2378ef"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.156003 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.156298 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.656286626 +0000 UTC m=+150.124934525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.168316 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dqf9w" podStartSLOduration=130.168300861 podStartE2EDuration="2m10.168300861s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.131462848 +0000 UTC m=+149.600110757" watchObservedRunningTime="2025-11-28 20:51:50.168300861 +0000 UTC m=+149.636948770" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.171393 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" event={"ID":"177bd2d3-3f98-43d3-93ff-5788659ad6da","Type":"ContainerStarted","Data":"79d6cf39ca7d66f44e721e6335d2f22dfdb797c60fba9b81b7d6fc5597d491da"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.171441 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" event={"ID":"177bd2d3-3f98-43d3-93ff-5788659ad6da","Type":"ContainerStarted","Data":"a8d724da50f6db16618fe0294f6c80b1aeaeaa5886fa4fa2b37d58984f6e6aeb"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.191943 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" event={"ID":"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e","Type":"ContainerStarted","Data":"e3ef77f0ad47f760aeedc08a6089ef56cde41a3159ba20108c52e5599df5ab1e"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.191987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" event={"ID":"b4c4d1b6-8ab9-48e0-b59d-eb863f02887e","Type":"ContainerStarted","Data":"e28d811fbf825af8ad845fa49c7da93b373dbe6145bb8c20bc51cc31b6ff38e0"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.223825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"43aadfabbd22200bf29a200bb30d5b3f0cc364a9d2a5a7926b4abe5910ebec30"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.224460 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.241086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" event={"ID":"70eb3caa-0533-4620-9dc5-4e5b9c4581bc","Type":"ContainerStarted","Data":"9fe24714e14a2850d50f26633ae6473759b7046a55c60ba1534414070ecd59f6"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.245429 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" event={"ID":"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6","Type":"ContainerStarted","Data":"de01fa2efec3acb2844920c0988bfd572669de9c0ab2b53882c686e096864208"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.245466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" event={"ID":"1b58440f-a51e-4d22-beaa-f9a5fc5a69c6","Type":"ContainerStarted","Data":"2cc3988e4e0da4a056b9d8295790a5b67dfb3f37b4b16b9d30d1ef5a2a88b6e1"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.256233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h4fjb" event={"ID":"b18c2b05-60d9-4f6b-ae61-2706b4cec752","Type":"ContainerStarted","Data":"92eec57685d9508a9f18b12da0a4e797f5c076a231631277a15e13ad587bf8b4"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.256780 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.257613 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.757571146 +0000 UTC m=+150.226219055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.258422 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.259782 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.759770362 +0000 UTC m=+150.228418271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.268453 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.275447 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" event={"ID":"6b39d147-f628-444c-9333-37b05318296e","Type":"ContainerStarted","Data":"bf61e94caef186ef4e52da7dca2324ab4d4dbb451525ceae1584cfbedcbd8161"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.309317 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qgvqz" podStartSLOduration=130.309298724 podStartE2EDuration="2m10.309298724s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.208624145 +0000 UTC m=+149.677272054" watchObservedRunningTime="2025-11-28 20:51:50.309298724 +0000 UTC m=+149.777946633" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.309355 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.319374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" event={"ID":"f84ca592-04c4-4edf-a398-0f879254007f","Type":"ContainerStarted","Data":"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.319898 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.332821 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.360402 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" event={"ID":"2ff64fb3-2997-4711-97af-97a674dd4424","Type":"ContainerStarted","Data":"8831c79aa6ac1bd37f901016cbb41253ff22e0af8de2a68c4c9cb3f02b1be642"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.361117 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.361282 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:50 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:50 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:50 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.361320 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.368384 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r26lm" podStartSLOduration=130.368370665 podStartE2EDuration="2m10.368370665s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.308796156 +0000 UTC m=+149.777444065" watchObservedRunningTime="2025-11-28 20:51:50.368370665 +0000 UTC m=+149.837018574" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.368924 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.369599 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.869573277 +0000 UTC m=+150.338221236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.369604 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jpxxk" podStartSLOduration=130.369597328 podStartE2EDuration="2m10.369597328s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.367837427 +0000 UTC m=+149.836485336" watchObservedRunningTime="2025-11-28 20:51:50.369597328 +0000 UTC m=+149.838245237" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.370201 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" event={"ID":"cd8123ea-935f-4537-a8ca-83107de89a7e","Type":"ContainerStarted","Data":"f9250512920bee7429273b0b2657b4c4b477292604936459a84864ec19f09988"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.394835 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" event={"ID":"b8ff6724-d919-4bbf-87c6-3b521739d1a2","Type":"ContainerStarted","Data":"48189588bfbe312a43eb5a5caacce4ef457bae5e82c511d65e11da4b133c25ad"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.397833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" event={"ID":"1857eeec-a0e1-463e-a77d-a41da08f2b3e","Type":"ContainerStarted","Data":"bf429387438bb3e29b9f1b19e20e6baa19655d1aa0fd425f6fc11b18ba7d7d6d"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.409746 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n7xjd" event={"ID":"6dc37571-86e4-4d8c-bc0f-97c53da56e4f","Type":"ContainerStarted","Data":"719689bf8ded487bf846d5b409fe89e999022117fb9f822a045fb3b6e3bb3090"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.441516 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" event={"ID":"d19f4f47-257a-4269-96f3-e8892c939e0b","Type":"ContainerStarted","Data":"d8a988f2d0674539ff0ec68ffa9b3f7fb21a767dd8ad81cba0c1fe76b607ecb4"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.455780 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f84141cbdc9e6c6710ecd39d8dc2de6b9cee416dbd195c484c1e65847a4068d8"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.455833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad0eac7e8452005f3e2ffba587ba8544945ef54401fb7cf234a9ca2b26f9ea55"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.462818 4957 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9rnxv" podStartSLOduration=130.462800138 podStartE2EDuration="2m10.462800138s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.458002283 +0000 UTC m=+149.926650192" watchObservedRunningTime="2025-11-28 20:51:50.462800138 +0000 UTC m=+149.931448047" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.472540 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.474406 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:50.974392059 +0000 UTC m=+150.443039968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.475241 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" event={"ID":"f667aadc-3176-4462-a4e1-38d6d8222d47","Type":"ContainerStarted","Data":"28619ba92f615c0d2cf12c98606ec68ff326959329e9684c72a09b82ce72e379"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.490279 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" event={"ID":"ea21530c-53e8-469d-bd38-997357f9b970","Type":"ContainerStarted","Data":"e52dfde5ce5f15684d70063b1635f66a87e5613d9141fa69c533384cd505e0d4"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.498610 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" event={"ID":"27a7baa1-a66c-4c13-be52-2a401578c92d","Type":"ContainerStarted","Data":"12140dbca1c1c676d33032ed64482368bf67b95d287183665cb7d4531926e0d3"} Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.501405 4957 patch_prober.go:28] interesting pod/downloads-7954f5f757-xs7kj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.501448 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xs7kj" podUID="80f38516-fdd0-42ed-855f-7f4f01a98786" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection 
refused" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.513005 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.514943 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" podStartSLOduration=130.51493096 podStartE2EDuration="2m10.51493096s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.514469724 +0000 UTC m=+149.983117633" watchObservedRunningTime="2025-11-28 20:51:50.51493096 +0000 UTC m=+149.983578869" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.529552 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mhtxz" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.578685 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.580178 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.080162124 +0000 UTC m=+150.548810033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.627565 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" podStartSLOduration=130.627539421 podStartE2EDuration="2m10.627539421s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.618456528 +0000 UTC m=+150.087104437" watchObservedRunningTime="2025-11-28 20:51:50.627539421 +0000 UTC m=+150.096187330" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.683464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.683940 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 20:51:51.18392767 +0000 UTC m=+150.652575579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.737030 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6pfc5" podStartSLOduration=131.737013295 podStartE2EDuration="2m11.737013295s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.686974735 +0000 UTC m=+150.155622644" watchObservedRunningTime="2025-11-28 20:51:50.737013295 +0000 UTC m=+150.205661204" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.738249 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.739453 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.745848 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.771906 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.786511 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.786883 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.286867838 +0000 UTC m=+150.755515747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.788177 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pj8vl" podStartSLOduration=130.788161432 podStartE2EDuration="2m10.788161432s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.782941282 +0000 UTC m=+150.251589191" watchObservedRunningTime="2025-11-28 20:51:50.788161432 +0000 UTC m=+150.256809341" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.892491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.892555 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.892593 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28fh\" (UniqueName: \"kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.892635 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.892900 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.392889141 +0000 UTC m=+150.861537050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.920953 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" podStartSLOduration=130.920938011 podStartE2EDuration="2m10.920938011s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.919715118 +0000 UTC m=+150.388363027" watchObservedRunningTime="2025-11-28 20:51:50.920938011 +0000 UTC m=+150.389585920" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.954530 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h4fjb" podStartSLOduration=8.954516861 podStartE2EDuration="8.954516861s" podCreationTimestamp="2025-11-28 20:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:50.953511876 +0000 UTC m=+150.422159775" watchObservedRunningTime="2025-11-28 20:51:50.954516861 +0000 UTC m=+150.423164770" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.965562 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.966566 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.973072 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.983136 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.993918 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.994075 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.994102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.994160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28fh\" (UniqueName: \"kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: E1128 20:51:50.994512 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.494497393 +0000 UTC m=+150.963145302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.994925 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:50 crc kubenswrapper[4957]: I1128 20:51:50.995135 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.003616 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bkd2s" podStartSLOduration=131.003589827 podStartE2EDuration="2m11.003589827s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.000907634 +0000 UTC m=+150.469555553" watchObservedRunningTime="2025-11-28 20:51:51.003589827 +0000 UTC m=+150.472237736" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.040159 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28fh\" (UniqueName: \"kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh\") pod \"community-operators-knngc\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.042590 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" podStartSLOduration=132.042573684 podStartE2EDuration="2m12.042573684s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.04186274 +0000 UTC m=+150.510510669" watchObservedRunningTime="2025-11-28 20:51:51.042573684 +0000 UTC m=+150.511221593" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.087589 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-knngc" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.095986 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.096050 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnfs\" (UniqueName: \"kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.096123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.096150 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.096479 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.596466887 +0000 UTC m=+151.065114796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.117548 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-66ztx" podStartSLOduration=131.117534135 podStartE2EDuration="2m11.117534135s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.115573277 +0000 UTC m=+150.584221176" watchObservedRunningTime="2025-11-28 20:51:51.117534135 +0000 UTC m=+150.586182044" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.142876 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrcxz"] Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.149096 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.169536 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7fxs8" podStartSLOduration=131.16949218 podStartE2EDuration="2m11.16949218s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.164664233 +0000 UTC m=+150.633312142" watchObservedRunningTime="2025-11-28 20:51:51.16949218 +0000 UTC m=+150.638140089" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.173037 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrcxz"] Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.196744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.196988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnfs\" (UniqueName: \"kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.197052 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.197083 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.197123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.197157 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mpsz\" (UniqueName: \"kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.197186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.197328 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.697309501 +0000 UTC m=+151.165957410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.198002 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.198055 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.216839 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnfs\" (UniqueName: \"kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs\") pod \"certified-operators-7gdnl\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.256733 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-khfrs" podStartSLOduration=131.256718904 podStartE2EDuration="2m11.256718904s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.255586325 +0000 UTC m=+150.724234234" watchObservedRunningTime="2025-11-28 20:51:51.256718904 +0000 UTC m=+150.725366813" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.257967 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" podStartSLOduration=131.257961077 podStartE2EDuration="2m11.257961077s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.223948382 +0000 UTC m=+150.692596291" watchObservedRunningTime="2025-11-28 20:51:51.257961077 +0000 UTC m=+150.726608976" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.275224 4957 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgnjn" podStartSLOduration=131.275196363 podStartE2EDuration="2m11.275196363s" podCreationTimestamp="2025-11-28 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.270146488 +0000 UTC m=+150.738794397" watchObservedRunningTime="2025-11-28 20:51:51.275196363 +0000 UTC m=+150.743844272" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.299564 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.305755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.305795 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mpsz\" (UniqueName: \"kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.305821 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.305846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.306126 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.806115131 +0000 UTC m=+151.274763040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.306608 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.307055 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.309246 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"] Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.310411 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.329324 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"] Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.330036 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mpsz\" (UniqueName: \"kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz\") pod \"community-operators-hrcxz\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") " pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.384880 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:51 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:51 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:51 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.385244 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.407476 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.407767 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.407836 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qd2q\" (UniqueName: \"kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.407894 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.408041 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:51.908025443 +0000 UTC m=+151.376673352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.512483 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.512541 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.512585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qd2q\" (UniqueName: \"kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.512618 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " 
pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.512981 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.513228 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:52.013199228 +0000 UTC m=+151.481847137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.513538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.515501 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.557406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qd2q\" (UniqueName: \"kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q\") pod \"certified-operators-8z9hn\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") " pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.617807 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.618197 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:52.118178606 +0000 UTC m=+151.586826515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.618625 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" event={"ID":"a29f65c5-800f-49f0-91e5-608c99574879","Type":"ContainerStarted","Data":"fe8cc35c3acdf1e441cfb3bef03f0e8dff5a42297d888cebe13782acb7ed3a3e"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.618667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" event={"ID":"a29f65c5-800f-49f0-91e5-608c99574879","Type":"ContainerStarted","Data":"f65c0914ac4ff83973cea47e4a16b04aae7f756214c9af942979b9d1daea6a69"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.651188 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.720499 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g8h7t" event={"ID":"6b39d147-f628-444c-9333-37b05318296e","Type":"ContainerStarted","Data":"c32d80ff1ce0505e63d6027e30373652d80d01ff21344d4449b9a7504369b249"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.721627 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.721907 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:52.22189648 +0000 UTC m=+151.690544389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.741308 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77dcefd9a722fdd78d6657b74a5a28f965353bf3c4fd0cbfd150c96db347e089"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.746254 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lw4b4" event={"ID":"251de81b-bec9-441f-a17f-77269fbfb233","Type":"ContainerStarted","Data":"dbf5a40c33b3f05f225e16be11b2dcb08e816967d2b712f2468e826e32e970a0"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.746749 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lw4b4" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.754438 4957 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.784456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e30704461d190e722b17d3833ec4ee2110feadf0e46c29366120118d71701355"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.816528 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.822971 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.824095 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 20:51:52.324080631 +0000 UTC m=+151.792728540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.840144 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lw4b4" podStartSLOduration=9.840132146 podStartE2EDuration="9.840132146s" podCreationTimestamp="2025-11-28 20:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.837911859 +0000 UTC m=+151.306559768" watchObservedRunningTime="2025-11-28 20:51:51.840132146 +0000 UTC m=+151.308780055" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.854864 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" event={"ID":"70eb3caa-0533-4620-9dc5-4e5b9c4581bc","Type":"ContainerStarted","Data":"f7efd16c9f6f35795ec44cfe325a1c8a2ad3de7cd327363bdecce9235c30818e"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.858476 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-27cqh" event={"ID":"177bd2d3-3f98-43d3-93ff-5788659ad6da","Type":"ContainerStarted","Data":"ab18d2f02efd29ddbc0439893d2ff9fa095b7b415ffb9659ce15ff895b0d317a"} Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.900184 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" podStartSLOduration=132.900171071 podStartE2EDuration="2m12.900171071s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:51.898846095 +0000 UTC m=+151.367494004" watchObservedRunningTime="2025-11-28 20:51:51.900171071 +0000 UTC m=+151.368818980" Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.924931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:51 crc kubenswrapper[4957]: E1128 20:51:51.935891 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 20:51:52.435872365 +0000 UTC m=+151.904520374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jx2ts" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 20:51:51 crc kubenswrapper[4957]: I1128 20:51:51.949310 4957 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T20:51:51.754463065Z","Handler":null,"Name":""} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.026751 4957 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.026793 4957 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.028640 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.046418 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.132961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.143310 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
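Annotation: the sequence just above is the registration finally landing. At 20:51:51.754 the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, at 20:51:52.026 csi_plugin.go validates and registers the driver, the stale mount for pod 8f668bae-612b-4b75-9490-919e737c6a3b finally tears down, and the image-registry volume proceeds to mount. Note also that the driver does not advertise the STAGE_UNSTAGE_VOLUME capability, so attacher.MountDevice is skipped as a no-op. A sketch of probing that capability over the driver's socket, using the CSI spec's generated Go client; the endpoint path is taken from the log, but this is an illustration, not the kubelet's actual code path:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Endpoint taken from the csi_plugin.go registration line above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		panic(err)
	}

	stage := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			stage = true // NodeStageVolume must be called before publishing
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", stage)
	// When false, MountDevice only records the global mount path, which is
	// the "Skipping MountDevice..." behavior visible in the log.
}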
Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.143359 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.190584 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.235606 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jx2ts\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.260189 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrcxz"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.306238 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.363338 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:52 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:52 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:52 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.363388 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.534961 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.704486 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.705702 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.708023 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.732629 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.789359 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.824086 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.840407 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhwg\" (UniqueName: \"kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.840506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.840589 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.866727 4957 generic.go:334] "Generic (PLEG): container finished" podID="c093d27c-da80-4125-93fa-47a03d1082c5" containerID="85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b" exitCode=0 Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.866812 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerDied","Data":"85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.866847 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerStarted","Data":"11f6e836154519e37885ffbd983752ae5543eb806b1f5ea62996aa9457e31ac9"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.868683 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.869579 4957 generic.go:334] "Generic (PLEG): container finished" podID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerID="9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3" exitCode=0 Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.869633 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerDied","Data":"9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.869658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerStarted","Data":"8263108fbd90972c219f02aea36eb68348dd73a3fc99e99382f0267bce94e23d"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.875523 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" event={"ID":"a29f65c5-800f-49f0-91e5-608c99574879","Type":"ContainerStarted","Data":"6c991842f8af9c62080d9999bdd45e6448771ad64182e2a8b029bc55eb7426dd"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.875571 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" event={"ID":"a29f65c5-800f-49f0-91e5-608c99574879","Type":"ContainerStarted","Data":"ff1a989feb4bffb40a82338919bb3fefce9101f7f49625065fd54a6e4242f490"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.878594 4957 generic.go:334] "Generic (PLEG): container finished" podID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerID="07f463a83e9e187f1fc22d7dfa64159c80edb71db00b6dc7d1723d34da687d76" exitCode=0 Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.878639 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerDied","Data":"07f463a83e9e187f1fc22d7dfa64159c80edb71db00b6dc7d1723d34da687d76"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.878673 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerStarted","Data":"e3d72e48caf0fa4f06bdbaab28b9ce34f52f4c84fe8a980ed9dd23484f97b938"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.879643 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" event={"ID":"a876c4a2-51d7-4d80-a6f1-9111850bf727","Type":"ContainerStarted","Data":"d8d107e0abaf4008c229623c7aa46b0e9864e29b7de54676442436071f9f3ed3"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.881397 4957 generic.go:334] "Generic (PLEG): container finished" podID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerID="f86a1c043bb1217de98797c4e112e544f5b0985bf85073ed6e65627e076954c7" exitCode=0 Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.882365 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerDied","Data":"f86a1c043bb1217de98797c4e112e544f5b0985bf85073ed6e65627e076954c7"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.882398 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerStarted","Data":"0abe748a8f20c6fb34ee276e2a75eb5304e5f91109f8a1434113500d7b23da09"} Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.910452 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-r5xmw" podStartSLOduration=10.910435542 
podStartE2EDuration="10.910435542s" podCreationTimestamp="2025-11-28 20:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:52.908534497 +0000 UTC m=+152.377182426" watchObservedRunningTime="2025-11-28 20:51:52.910435542 +0000 UTC m=+152.379083451" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.942308 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.942517 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.942698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhwg\" (UniqueName: \"kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.947229 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.947463 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:52 crc kubenswrapper[4957]: I1128 20:51:52.974583 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhwg\" (UniqueName: \"kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg\") pod \"redhat-marketplace-fzzwl\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.023077 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.108921 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"] Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.112984 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.118642 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"] Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.248765 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76nr\" (UniqueName: \"kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.248824 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.248859 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.291823 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.349658 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76nr\" (UniqueName: \"kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.349711 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.349740 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.350160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.350194 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content\") pod \"redhat-marketplace-qs6q2\" (UID: 
\"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.355819 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:53 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:53 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:53 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.356154 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.370952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76nr\" (UniqueName: \"kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr\") pod \"redhat-marketplace-qs6q2\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") " pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.442919 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.650868 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"] Nov 28 20:51:53 crc kubenswrapper[4957]: W1128 20:51:53.665890 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc7d1ffc_0e94_4aa1_8068_23d1083c57ce.slice/crio-8d1c7cf230b13cef246258d4ab08b9eb0a56d3583b4b248ee318a05c69b76f3a WatchSource:0}: Error finding container 8d1c7cf230b13cef246258d4ab08b9eb0a56d3583b4b248ee318a05c69b76f3a: Status 404 returned error can't find the container with id 8d1c7cf230b13cef246258d4ab08b9eb0a56d3583b4b248ee318a05c69b76f3a Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.898372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerDied","Data":"f0ec28726f1dea03a31bed1423e38a6d244aa54bd727b61ce1f0fe60d04e9bac"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.897828 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerID="f0ec28726f1dea03a31bed1423e38a6d244aa54bd727b61ce1f0fe60d04e9bac" exitCode=0 Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.898776 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerStarted","Data":"8d1c7cf230b13cef246258d4ab08b9eb0a56d3583b4b248ee318a05c69b76f3a"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.909722 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" event={"ID":"a876c4a2-51d7-4d80-a6f1-9111850bf727","Type":"ContainerStarted","Data":"bc4491953b9e1cb3bf843d6c9b98e297420fd500cc1fb6ff5a15b5fa7dbe0a16"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.910421 4957 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.910924 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.911916 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.914991 4957 generic.go:334] "Generic (PLEG): container finished" podID="d19f4f47-257a-4269-96f3-e8892c939e0b" containerID="d8a988f2d0674539ff0ec68ffa9b3f7fb21a767dd8ad81cba0c1fe76b607ecb4" exitCode=0 Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.915065 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" event={"ID":"d19f4f47-257a-4269-96f3-e8892c939e0b","Type":"ContainerDied","Data":"d8a988f2d0674539ff0ec68ffa9b3f7fb21a767dd8ad81cba0c1fe76b607ecb4"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.917262 4957 generic.go:334] "Generic (PLEG): container finished" podID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerID="955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df" exitCode=0 Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.918119 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerDied","Data":"955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.918152 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerStarted","Data":"84a19931b4069045db1475c66007f69d560c5245ad856c951c0e829b3217c984"} Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.922150 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.935220 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.958276 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.958442 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9cc\" (UniqueName: \"kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.958494 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" 
Nov 28 20:51:53 crc kubenswrapper[4957]: I1128 20:51:53.996103 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" podStartSLOduration=134.9960825 podStartE2EDuration="2m14.9960825s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:51:53.994472604 +0000 UTC m=+153.463120513" watchObservedRunningTime="2025-11-28 20:51:53.9960825 +0000 UTC m=+153.464730409" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.059924 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.059998 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl9cc\" (UniqueName: \"kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.060026 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.060629 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.061033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.082075 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl9cc\" (UniqueName: \"kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc\") pod \"redhat-operators-sn2tq\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.238595 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.302815 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.303340 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.303361 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.304647 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.305216 4957 patch_prober.go:28] interesting pod/console-f9d7485db-6p7fc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.305256 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6p7fc" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.314119 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.356729 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:54 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:54 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:54 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.356814 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.364399 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qlpl\" (UniqueName: \"kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.364522 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.364706 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.473445 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.473545 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qlpl\" (UniqueName: \"kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.473631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.474052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.474308 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.491286 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qlpl\" (UniqueName: \"kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl\") pod \"redhat-operators-qp4kq\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.564430 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:51:54 crc kubenswrapper[4957]: W1128 20:51:54.621538 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ed0d28_6db1_4f2a_87a6_07ffcc9d6ea7.slice/crio-7877f6b24b8b657ae3663d25b37cddb9e5c7a3d7b7c5162c7edcb10f25a857f2 WatchSource:0}: Error finding container 7877f6b24b8b657ae3663d25b37cddb9e5c7a3d7b7c5162c7edcb10f25a857f2: Status 404 returned error can't find the container with id 7877f6b24b8b657ae3663d25b37cddb9e5c7a3d7b7c5162c7edcb10f25a857f2 Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.631769 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.785577 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.787715 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.789413 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.789727 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.791720 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.883649 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.883728 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.923437 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.935669 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerStarted","Data":"7877f6b24b8b657ae3663d25b37cddb9e5c7a3d7b7c5162c7edcb10f25a857f2"} Nov 28 20:51:54 crc kubenswrapper[4957]: W1128 20:51:54.946420 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9881d17_454a_476d_903b_66b306a1290a.slice/crio-ea9b88bc7567cd6b0cef30cea6627a72633177dfbc6809a7bb7c7278379ac914 WatchSource:0}: Error finding container ea9b88bc7567cd6b0cef30cea6627a72633177dfbc6809a7bb7c7278379ac914: Status 404 returned error can't find the container with id ea9b88bc7567cd6b0cef30cea6627a72633177dfbc6809a7bb7c7278379ac914 Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.964393 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.964478 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.970456 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.975276 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xs7kj" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.984732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.984810 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:54 crc kubenswrapper[4957]: I1128 20:51:54.985945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.035175 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.109045 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.172831 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.288587 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume\") pod \"d19f4f47-257a-4269-96f3-e8892c939e0b\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.288759 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2frh\" (UniqueName: \"kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh\") pod \"d19f4f47-257a-4269-96f3-e8892c939e0b\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.288794 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume\") pod \"d19f4f47-257a-4269-96f3-e8892c939e0b\" (UID: \"d19f4f47-257a-4269-96f3-e8892c939e0b\") " Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.289379 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "d19f4f47-257a-4269-96f3-e8892c939e0b" (UID: "d19f4f47-257a-4269-96f3-e8892c939e0b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.292926 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh" (OuterVolumeSpecName: "kube-api-access-x2frh") pod "d19f4f47-257a-4269-96f3-e8892c939e0b" (UID: "d19f4f47-257a-4269-96f3-e8892c939e0b"). InnerVolumeSpecName "kube-api-access-x2frh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.292999 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d19f4f47-257a-4269-96f3-e8892c939e0b" (UID: "d19f4f47-257a-4269-96f3-e8892c939e0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.352070 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.355141 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:55 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:55 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:55 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.355194 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.390041 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19f4f47-257a-4269-96f3-e8892c939e0b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.390089 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19f4f47-257a-4269-96f3-e8892c939e0b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.390099 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2frh\" (UniqueName: \"kubernetes.io/projected/d19f4f47-257a-4269-96f3-e8892c939e0b-kube-api-access-x2frh\") on node \"crc\" DevicePath \"\"" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.422432 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 20:51:55 crc kubenswrapper[4957]: W1128 20:51:55.442143 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod44445d9d_4644_4a55_aab1_3b984f09a548.slice/crio-8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f WatchSource:0}: Error finding container 8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f: Status 404 returned error can't find the container with id 8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.943066 4957 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44445d9d-4644-4a55-aab1-3b984f09a548","Type":"ContainerStarted","Data":"8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f"} Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.950473 4957 generic.go:334] "Generic (PLEG): container finished" podID="a9881d17-454a-476d-903b-66b306a1290a" containerID="8f49f8c7722d6e617ca7aefc54fd72bf1c1b8b6dd57e7700b6b7ac8029563d88" exitCode=0 Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.950570 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerDied","Data":"8f49f8c7722d6e617ca7aefc54fd72bf1c1b8b6dd57e7700b6b7ac8029563d88"} Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.950636 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerStarted","Data":"ea9b88bc7567cd6b0cef30cea6627a72633177dfbc6809a7bb7c7278379ac914"} Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.968422 4957 generic.go:334] "Generic (PLEG): container finished" podID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerID="4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35" exitCode=0 Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.968542 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerDied","Data":"4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35"} Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.981734 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.983610 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6" event={"ID":"d19f4f47-257a-4269-96f3-e8892c939e0b","Type":"ContainerDied","Data":"3d582c1887e5e71964ccb75236335a14eb816a4bf0a390fc808e792ee2ccc17b"} Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.986533 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d582c1887e5e71964ccb75236335a14eb816a4bf0a390fc808e792ee2ccc17b" Nov 28 20:51:55 crc kubenswrapper[4957]: I1128 20:51:55.992174 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8bq66" Nov 28 20:51:56 crc kubenswrapper[4957]: I1128 20:51:56.355998 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:56 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:56 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:56 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:56 crc kubenswrapper[4957]: I1128 20:51:56.356073 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:56 crc kubenswrapper[4957]: I1128 20:51:56.993329 4957 generic.go:334] "Generic (PLEG): container finished" podID="44445d9d-4644-4a55-aab1-3b984f09a548" containerID="ba4e18361308d24cc48c66daf845b5d20f4c448272843f9048bd87229c221449" exitCode=0 Nov 28 20:51:56 crc kubenswrapper[4957]: I1128 20:51:56.993604 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44445d9d-4644-4a55-aab1-3b984f09a548","Type":"ContainerDied","Data":"ba4e18361308d24cc48c66daf845b5d20f4c448272843f9048bd87229c221449"} Nov 28 20:51:57 crc kubenswrapper[4957]: I1128 20:51:57.354923 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:57 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:57 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:57 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:57 crc kubenswrapper[4957]: I1128 20:51:57.355298 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.284734 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.331862 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access\") pod \"44445d9d-4644-4a55-aab1-3b984f09a548\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.331961 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir\") pod \"44445d9d-4644-4a55-aab1-3b984f09a548\" (UID: \"44445d9d-4644-4a55-aab1-3b984f09a548\") " Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.332308 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44445d9d-4644-4a55-aab1-3b984f09a548" (UID: "44445d9d-4644-4a55-aab1-3b984f09a548"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.337665 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44445d9d-4644-4a55-aab1-3b984f09a548" (UID: "44445d9d-4644-4a55-aab1-3b984f09a548"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.354234 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:58 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:58 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:58 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.354317 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.432952 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44445d9d-4644-4a55-aab1-3b984f09a548-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.432985 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44445d9d-4644-4a55-aab1-3b984f09a548-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.485977 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 20:51:58 crc kubenswrapper[4957]: E1128 20:51:58.486391 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44445d9d-4644-4a55-aab1-3b984f09a548" containerName="pruner" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.486407 4957 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="44445d9d-4644-4a55-aab1-3b984f09a548" containerName="pruner" Nov 28 20:51:58 crc kubenswrapper[4957]: E1128 20:51:58.486424 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19f4f47-257a-4269-96f3-e8892c939e0b" containerName="collect-profiles" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.486516 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19f4f47-257a-4269-96f3-e8892c939e0b" containerName="collect-profiles" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.486713 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="44445d9d-4644-4a55-aab1-3b984f09a548" containerName="pruner" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.486741 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19f4f47-257a-4269-96f3-e8892c939e0b" containerName="collect-profiles" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.487633 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.490343 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.491792 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.492837 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.534240 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.534698 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.636186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.636322 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.636345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.655261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:58 crc kubenswrapper[4957]: I1128 20:51:58.808547 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.009973 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44445d9d-4644-4a55-aab1-3b984f09a548","Type":"ContainerDied","Data":"8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f"} Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.010009 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f13704b034d525772fd01246535493d716e18a9d12d0bcde5efbc18f4f5450f" Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.010109 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.229258 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.361569 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:51:59 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:51:59 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:51:59 crc kubenswrapper[4957]: healthz check failed Nov 28 20:51:59 crc kubenswrapper[4957]: I1128 20:51:59.362030 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:52:00 crc kubenswrapper[4957]: I1128 20:52:00.025104 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a0003b3-fb9b-4126-af40-2b9b3bebdde0","Type":"ContainerStarted","Data":"632415d99893b73fbbe9b82a276f461d883c898248628c7122b7b244c4d23454"} Nov 28 20:52:00 crc kubenswrapper[4957]: I1128 20:52:00.025153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a0003b3-fb9b-4126-af40-2b9b3bebdde0","Type":"ContainerStarted","Data":"31bfd48d3085d25c44e2e6d2fb039d9610d7da1ee1c3904453c37071b4941379"} Nov 28 20:52:00 crc kubenswrapper[4957]: I1128 20:52:00.355561 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:52:00 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:52:00 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:52:00 crc kubenswrapper[4957]: healthz check failed Nov 28 
20:52:00 crc kubenswrapper[4957]: I1128 20:52:00.355861 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:52:00 crc kubenswrapper[4957]: I1128 20:52:00.834805 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lw4b4" Nov 28 20:52:01 crc kubenswrapper[4957]: I1128 20:52:01.043122 4957 generic.go:334] "Generic (PLEG): container finished" podID="1a0003b3-fb9b-4126-af40-2b9b3bebdde0" containerID="632415d99893b73fbbe9b82a276f461d883c898248628c7122b7b244c4d23454" exitCode=0 Nov 28 20:52:01 crc kubenswrapper[4957]: I1128 20:52:01.043173 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a0003b3-fb9b-4126-af40-2b9b3bebdde0","Type":"ContainerDied","Data":"632415d99893b73fbbe9b82a276f461d883c898248628c7122b7b244c4d23454"} Nov 28 20:52:01 crc kubenswrapper[4957]: I1128 20:52:01.356245 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:52:01 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:52:01 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:52:01 crc kubenswrapper[4957]: healthz check failed Nov 28 20:52:01 crc kubenswrapper[4957]: I1128 20:52:01.356308 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.296228 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.301108 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cccab1fe-132a-4c45-909b-6f1ba7c8abab-metrics-certs\") pod \"network-metrics-daemon-7zhxb\" (UID: \"cccab1fe-132a-4c45-909b-6f1ba7c8abab\") " pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.328452 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.356292 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:52:02 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:52:02 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:52:02 crc kubenswrapper[4957]: healthz check failed Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.356390 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.397446 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir\") pod \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.397565 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access\") pod \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\" (UID: \"1a0003b3-fb9b-4126-af40-2b9b3bebdde0\") " Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.397678 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a0003b3-fb9b-4126-af40-2b9b3bebdde0" (UID: "1a0003b3-fb9b-4126-af40-2b9b3bebdde0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.409473 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a0003b3-fb9b-4126-af40-2b9b3bebdde0" (UID: "1a0003b3-fb9b-4126-af40-2b9b3bebdde0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.456861 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7zhxb" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.498547 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.498588 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0003b3-fb9b-4126-af40-2b9b3bebdde0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:52:02 crc kubenswrapper[4957]: I1128 20:52:02.659494 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7zhxb"] Nov 28 20:52:03 crc kubenswrapper[4957]: I1128 20:52:03.057560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a0003b3-fb9b-4126-af40-2b9b3bebdde0","Type":"ContainerDied","Data":"31bfd48d3085d25c44e2e6d2fb039d9610d7da1ee1c3904453c37071b4941379"} Nov 28 20:52:03 crc kubenswrapper[4957]: I1128 20:52:03.057595 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31bfd48d3085d25c44e2e6d2fb039d9610d7da1ee1c3904453c37071b4941379" Nov 28 20:52:03 crc kubenswrapper[4957]: I1128 20:52:03.057649 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 20:52:03 crc kubenswrapper[4957]: I1128 20:52:03.354699 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 20:52:03 crc kubenswrapper[4957]: [-]has-synced failed: reason withheld Nov 28 20:52:03 crc kubenswrapper[4957]: [+]process-running ok Nov 28 20:52:03 crc kubenswrapper[4957]: healthz check failed Nov 28 20:52:03 crc kubenswrapper[4957]: I1128 20:52:03.354773 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 20:52:04 crc kubenswrapper[4957]: I1128 20:52:04.304468 4957 patch_prober.go:28] interesting pod/console-f9d7485db-6p7fc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 28 20:52:04 crc kubenswrapper[4957]: I1128 20:52:04.304839 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6p7fc" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 28 20:52:04 crc kubenswrapper[4957]: I1128 20:52:04.354716 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:52:04 crc kubenswrapper[4957]: I1128 20:52:04.358356 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t54js" Nov 28 20:52:08 crc kubenswrapper[4957]: I1128 20:52:08.992982 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:52:08 crc kubenswrapper[4957]: I1128 20:52:08.993457 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 20:52:09 crc kubenswrapper[4957]: W1128 20:52:09.679362 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcccab1fe_132a_4c45_909b_6f1ba7c8abab.slice/crio-7ea27c915266b972bf4175666d4161abe22d0cd836feb5dba24c704c6bc5b2c6 WatchSource:0}: Error finding container 7ea27c915266b972bf4175666d4161abe22d0cd836feb5dba24c704c6bc5b2c6: Status 404 returned error can't find the container with id 7ea27c915266b972bf4175666d4161abe22d0cd836feb5dba24c704c6bc5b2c6 Nov 28 20:52:10 crc kubenswrapper[4957]: I1128 20:52:10.105079 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" event={"ID":"cccab1fe-132a-4c45-909b-6f1ba7c8abab","Type":"ContainerStarted","Data":"7ea27c915266b972bf4175666d4161abe22d0cd836feb5dba24c704c6bc5b2c6"} Nov 28 20:52:12 crc kubenswrapper[4957]: I1128 20:52:12.539966 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:52:14 crc kubenswrapper[4957]: I1128 20:52:14.306923 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:52:14 crc kubenswrapper[4957]: I1128 20:52:14.310752 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:52:23 crc kubenswrapper[4957]: E1128 20:52:23.412931 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 20:52:23 crc kubenswrapper[4957]: E1128 20:52:23.413435 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r28fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-knngc_openshift-marketplace(2a6ea13b-5dba-46d9-a947-3a08d376c195): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:23 crc kubenswrapper[4957]: E1128 20:52:23.414716 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-knngc" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" Nov 28 20:52:25 crc kubenswrapper[4957]: I1128 20:52:25.145063 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8lgvk" Nov 28 20:52:27 crc kubenswrapper[4957]: I1128 20:52:27.982923 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.086120 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-knngc" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.300717 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.300962 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mpsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hrcxz_openshift-marketplace(1abfdb12-1213-4ec8-b2f0-c1b4bde073d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.302188 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hrcxz" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.462340 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.462647 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qd2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8z9hn_openshift-marketplace(aed60da8-b12f-4fcd-81f6-8fbfcddf08b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:30 crc kubenswrapper[4957]: E1128 20:52:30.463871 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8z9hn" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.572967 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hrcxz" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.573054 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8z9hn" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.640252 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.640396 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmhwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fzzwl_openshift-marketplace(2b119a86-1fc6-45aa-8b80-3abfc5a36c7c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.641589 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fzzwl" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.656585 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.656697 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f76nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qs6q2_openshift-marketplace(bc7d1ffc-0e94-4aa1-8068-23d1083c57ce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.658000 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qs6q2" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.814665 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.815042 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plnfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7gdnl_openshift-marketplace(c093d27c-da80-4125-93fa-47a03d1082c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 20:52:33 crc kubenswrapper[4957]: E1128 20:52:33.816507 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7gdnl" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.952616 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 20:52:34 crc kubenswrapper[4957]: E1128 20:52:34.953053 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0003b3-fb9b-4126-af40-2b9b3bebdde0" containerName="pruner" Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.953069 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0003b3-fb9b-4126-af40-2b9b3bebdde0" containerName="pruner" Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.953262 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0003b3-fb9b-4126-af40-2b9b3bebdde0" containerName="pruner" Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.953692 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.959697 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.959736 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 20:52:34 crc kubenswrapper[4957]: I1128 20:52:34.960079 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.059417 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.059466 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.160542 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.160885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.161004 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.182806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:35 crc kubenswrapper[4957]: I1128 20:52:35.288801 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.764352 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fzzwl" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.764435 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7gdnl" podUID="c093d27c-da80-4125-93fa-47a03d1082c5"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.764463 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qs6q2" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.818565 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.819094 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl9cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sn2tq_openshift-marketplace(c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.820615 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sn2tq" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.834957 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.835178 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qlpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qp4kq_openshift-marketplace(a9881d17-454a-476d-903b-66b306a1290a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 20:52:36 crc kubenswrapper[4957]: E1128 20:52:36.836455 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qp4kq" podUID="a9881d17-454a-476d-903b-66b306a1290a"
Nov 28 20:52:37 crc kubenswrapper[4957]: I1128 20:52:37.162543 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 20:52:37 crc kubenswrapper[4957]: W1128 20:52:37.172109 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4878ba56_0560_423f_b6c9_150b41885422.slice/crio-c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c WatchSource:0}: Error finding container c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c: Status 404 returned error can't find the container with id c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c
Nov 28 20:52:37 crc kubenswrapper[4957]: I1128 20:52:37.255975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" event={"ID":"cccab1fe-132a-4c45-909b-6f1ba7c8abab","Type":"ContainerStarted","Data":"fbdc5be1c550f2fd050bc4002ab5ebadad1bfe4ff97e4eaea604fce30517b115"}
Nov 28 20:52:37 crc kubenswrapper[4957]: I1128 20:52:37.256027 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7zhxb" event={"ID":"cccab1fe-132a-4c45-909b-6f1ba7c8abab","Type":"ContainerStarted","Data":"fb8a1eb647f3bd0cabd6e4a66c0fd1c1e224077faa24a4d38fd3af8b2023a417"}
Nov 28 20:52:37 crc kubenswrapper[4957]: I1128 20:52:37.257687 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4878ba56-0560-423f-b6c9-150b41885422","Type":"ContainerStarted","Data":"c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c"}
Nov 28 20:52:37 crc kubenswrapper[4957]: E1128 20:52:37.259991 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sn2tq" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7"
Nov 28 20:52:37 crc kubenswrapper[4957]: E1128 20:52:37.260067 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qp4kq" podUID="a9881d17-454a-476d-903b-66b306a1290a"
Nov 28 20:52:37 crc kubenswrapper[4957]: I1128 20:52:37.283793 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7zhxb" podStartSLOduration=178.283771745 podStartE2EDuration="2m58.283771745s" podCreationTimestamp="2025-11-28 20:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:52:37.277729015 +0000 UTC m=+196.746376934" watchObservedRunningTime="2025-11-28 20:52:37.283771745 +0000 UTC m=+196.752419674"
Nov 28 20:52:38 crc kubenswrapper[4957]: I1128 20:52:38.266354 4957 generic.go:334] "Generic (PLEG): container finished" podID="4878ba56-0560-423f-b6c9-150b41885422" containerID="430a644cdfa74923b256df259d9b9ee8f633e8f7485ac6093db7e58b58069b53" exitCode=0
Nov 28 20:52:38 crc kubenswrapper[4957]: I1128 20:52:38.266622 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4878ba56-0560-423f-b6c9-150b41885422","Type":"ContainerDied","Data":"430a644cdfa74923b256df259d9b9ee8f633e8f7485ac6093db7e58b58069b53"}
Nov 28 20:52:38 crc kubenswrapper[4957]: I1128 20:52:38.993444 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 20:52:38 crc kubenswrapper[4957]: I1128 20:52:38.994071 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.500287 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.524276 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir\") pod \"4878ba56-0560-423f-b6c9-150b41885422\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") "
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.524320 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4878ba56-0560-423f-b6c9-150b41885422" (UID: "4878ba56-0560-423f-b6c9-150b41885422"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.524620 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access\") pod \"4878ba56-0560-423f-b6c9-150b41885422\" (UID: \"4878ba56-0560-423f-b6c9-150b41885422\") "
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.525072 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4878ba56-0560-423f-b6c9-150b41885422-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.531006 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4878ba56-0560-423f-b6c9-150b41885422" (UID: "4878ba56-0560-423f-b6c9-150b41885422"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 20:52:39 crc kubenswrapper[4957]: I1128 20:52:39.625959 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4878ba56-0560-423f-b6c9-150b41885422-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 20:52:40 crc kubenswrapper[4957]: I1128 20:52:40.276446 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4878ba56-0560-423f-b6c9-150b41885422","Type":"ContainerDied","Data":"c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c"}
Nov 28 20:52:40 crc kubenswrapper[4957]: I1128 20:52:40.276487 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4812600852b8db8702d614b8adf5b1f4871cd540b54caf97fe562de7635620c"
Nov 28 20:52:40 crc kubenswrapper[4957]: I1128 20:52:40.276492 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 20:52:41 crc kubenswrapper[4957]: I1128 20:52:41.949733 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 20:52:41 crc kubenswrapper[4957]: E1128 20:52:41.951462 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4878ba56-0560-423f-b6c9-150b41885422" containerName="pruner"
Nov 28 20:52:41 crc kubenswrapper[4957]: I1128 20:52:41.951512 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="4878ba56-0560-423f-b6c9-150b41885422" containerName="pruner"
Nov 28 20:52:41 crc kubenswrapper[4957]: I1128 20:52:41.951713 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="4878ba56-0560-423f-b6c9-150b41885422" containerName="pruner"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.007333 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.009588 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.011122 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.012286 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.104568 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.104622 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.104656 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.206080 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.206138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.206170 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.206267 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.206325 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.224285 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.329479 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 20:52:42 crc kubenswrapper[4957]: I1128 20:52:42.722820 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 20:52:43 crc kubenswrapper[4957]: I1128 20:52:43.291825 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba","Type":"ContainerStarted","Data":"da05f2889023bb8d7058e4f5a46fbb5469a8cb4dc98379918cf11aaea11f23b9"}
Nov 28 20:52:43 crc kubenswrapper[4957]: I1128 20:52:43.291870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba","Type":"ContainerStarted","Data":"14147d72bdeadff7f65a3cb5b495ce09bf1cad8532067ae18bb6001dccd9d619"}
Nov 28 20:52:43 crc kubenswrapper[4957]: I1128 20:52:43.313446 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.313423316 podStartE2EDuration="2.313423316s" podCreationTimestamp="2025-11-28 20:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:52:43.310119819 +0000 UTC m=+202.778767778" watchObservedRunningTime="2025-11-28 20:52:43.313423316 +0000 UTC m=+202.782071245"
Nov 28 20:52:44 crc kubenswrapper[4957]: I1128 20:52:44.299076 4957 generic.go:334] "Generic (PLEG): container finished" podID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerID="b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec" exitCode=0
Nov 28 20:52:44 crc kubenswrapper[4957]: I1128 20:52:44.299239 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerDied","Data":"b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec"}
Nov 28 20:52:46 crc kubenswrapper[4957]: I1128 20:52:46.311900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerStarted","Data":"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838"}
Nov 28 20:52:46 crc kubenswrapper[4957]: I1128 20:52:46.329793 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-knngc" podStartSLOduration=3.785851913 podStartE2EDuration="56.32977449s" podCreationTimestamp="2025-11-28 20:51:50 +0000 UTC" firstStartedPulling="2025-11-28 20:51:52.871044621 +0000 UTC m=+152.339692530" lastFinishedPulling="2025-11-28 20:52:45.414967208 +0000 UTC m=+204.883615107" observedRunningTime="2025-11-28 20:52:46.329523137 +0000 UTC m=+205.798171046" watchObservedRunningTime="2025-11-28 20:52:46.32977449 +0000 UTC m=+205.798422399"
Nov 28 20:52:48 crc kubenswrapper[4957]: I1128 20:52:48.330673 4957 generic.go:334] "Generic (PLEG): container finished" podID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerID="5ebe020491ef13928310bdc553aae416acb9d22869f89c14924d2f8490dad84c" exitCode=0
Nov 28 20:52:48 crc kubenswrapper[4957]: I1128 20:52:48.330774 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerDied","Data":"5ebe020491ef13928310bdc553aae416acb9d22869f89c14924d2f8490dad84c"}
Nov 28 20:52:49 crc kubenswrapper[4957]: I1128 20:52:49.340818 4957 generic.go:334] "Generic (PLEG): container finished" podID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerID="a7e6bc2aab55216f18caf33a0c162440b21ac2f18e63af16e5aafb3853b89142" exitCode=0
Nov 28 20:52:49 crc kubenswrapper[4957]: I1128 20:52:49.340981 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerDied","Data":"a7e6bc2aab55216f18caf33a0c162440b21ac2f18e63af16e5aafb3853b89142"}
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.348426 4957 generic.go:334] "Generic (PLEG): container finished" podID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerID="4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2" exitCode=0
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.348497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerDied","Data":"4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2"}
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.351985 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerStarted","Data":"c486d3f0254b30205886aebf0a86b2015026f15f8f32eb5d5617944a3f385bd1"}
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.355817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerStarted","Data":"953124c2090285e169ba584b73c984d350da1ae2294061f5dbb4318c804d30c8"}
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.357825 4957 generic.go:334] "Generic (PLEG): container finished" podID="c093d27c-da80-4125-93fa-47a03d1082c5" containerID="47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf" exitCode=0
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.357859 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerDied","Data":"47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf"}
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.392749 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrcxz" podStartSLOduration=2.550317669 podStartE2EDuration="59.392730597s" podCreationTimestamp="2025-11-28 20:51:51 +0000 UTC" firstStartedPulling="2025-11-28 20:51:52.880012321 +0000 UTC m=+152.348660230" lastFinishedPulling="2025-11-28 20:52:49.722425249 +0000 UTC m=+209.191073158" observedRunningTime="2025-11-28 20:52:50.390248859 +0000 UTC m=+209.858896768" watchObservedRunningTime="2025-11-28 20:52:50.392730597 +0000 UTC m=+209.861378506"
Nov 28 20:52:50 crc kubenswrapper[4957]: I1128 20:52:50.436831 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8z9hn" podStartSLOduration=3.061792022 podStartE2EDuration="59.436804516s" podCreationTimestamp="2025-11-28 20:51:51 +0000 UTC" firstStartedPulling="2025-11-28 20:51:52.88289105 +0000 UTC m=+152.351538959" lastFinishedPulling="2025-11-28 20:52:49.257903544 +0000 UTC m=+208.726551453" observedRunningTime="2025-11-28 20:52:50.434551801 +0000 UTC m=+209.903199710" watchObservedRunningTime="2025-11-28 20:52:50.436804516 +0000 UTC m=+209.905452425"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.088442 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-knngc"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.089004 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-knngc"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.153351 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-knngc"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.366915 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerStarted","Data":"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08"}
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.369060 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerStarted","Data":"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d"}
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.386698 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzzwl" podStartSLOduration=2.536884342 podStartE2EDuration="59.386677027s" podCreationTimestamp="2025-11-28 20:51:52 +0000 UTC" firstStartedPulling="2025-11-28 20:51:53.919820274 +0000 UTC m=+153.388468193" lastFinishedPulling="2025-11-28 20:52:50.769612969 +0000 UTC m=+210.238260878" observedRunningTime="2025-11-28 20:52:51.385046189 +0000 UTC m=+210.853694118" watchObservedRunningTime="2025-11-28 20:52:51.386677027 +0000 UTC m=+210.855324956"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.408125 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gdnl" podStartSLOduration=3.403678161 podStartE2EDuration="1m1.408100575s" podCreationTimestamp="2025-11-28 20:51:50 +0000 UTC" firstStartedPulling="2025-11-28 20:51:52.868441761 +0000 UTC m=+152.337089660" lastFinishedPulling="2025-11-28 20:52:50.872864165 +0000 UTC m=+210.341512074" observedRunningTime="2025-11-28 20:52:51.404315993 +0000 UTC m=+210.872963912" watchObservedRunningTime="2025-11-28 20:52:51.408100575 +0000 UTC m=+210.876748494"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.418881 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-knngc"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.515953 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hrcxz"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.516022 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrcxz"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.651929 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8z9hn"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.652007 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8z9hn"
Nov 28 20:52:51 crc kubenswrapper[4957]: I1128 20:52:51.703293 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8z9hn"
Nov 28 20:52:52 crc kubenswrapper[4957]: I1128 20:52:52.376122 4957 generic.go:334] "Generic (PLEG): container finished" podID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerID="89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2" exitCode=0
Nov 28 20:52:52 crc kubenswrapper[4957]: I1128 20:52:52.376227 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerDied","Data":"89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2"}
Nov 28 20:52:52 crc kubenswrapper[4957]: I1128 20:52:52.553377 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hrcxz" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="registry-server" probeResult="failure" output=<
Nov 28 20:52:52 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Nov 28 20:52:52 crc kubenswrapper[4957]: >
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.023285 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fzzwl"
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.023555 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzzwl"
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.170576 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzzwl"
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.382458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerStarted","Data":"c0f6b65ce236cde94743c7638d3496158665a612626f81059cccdabb98c462c4"}
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.384285 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerStarted","Data":"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc"}
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.385967 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerID="dc9cc1ced89089b648c90673640e5c5fb821835b591f08e0978e1719e7063bad" exitCode=0
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.386346 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerDied","Data":"dc9cc1ced89089b648c90673640e5c5fb821835b591f08e0978e1719e7063bad"}
Nov 28 20:52:53 crc kubenswrapper[4957]: I1128 20:52:53.421864 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sn2tq" podStartSLOduration=3.598990944 podStartE2EDuration="1m0.421844202s" podCreationTimestamp="2025-11-28 20:51:53 +0000 UTC" firstStartedPulling="2025-11-28 20:51:55.970885644 +0000 UTC m=+155.439533553" lastFinishedPulling="2025-11-28 20:52:52.793738902 +0000 UTC m=+212.262386811" observedRunningTime="2025-11-28 20:52:53.418477465 +0000 UTC m=+212.887125374" watchObservedRunningTime="2025-11-28 20:52:53.421844202 +0000 UTC m=+212.890492111"
Nov 28 20:52:54 crc kubenswrapper[4957]: I1128 20:52:54.239808 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sn2tq"
Nov 28 20:52:54 crc kubenswrapper[4957]: I1128 20:52:54.240251 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sn2tq"
Nov 28 20:52:54 crc kubenswrapper[4957]: I1128 20:52:54.402406 4957 generic.go:334] "Generic (PLEG): container finished" podID="a9881d17-454a-476d-903b-66b306a1290a" containerID="c0f6b65ce236cde94743c7638d3496158665a612626f81059cccdabb98c462c4" exitCode=0
Nov 28 20:52:54 crc kubenswrapper[4957]: I1128 20:52:54.402509 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerDied","Data":"c0f6b65ce236cde94743c7638d3496158665a612626f81059cccdabb98c462c4"}
Nov 28 20:52:55 crc kubenswrapper[4957]: I1128 20:52:55.293051 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sn2tq" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="registry-server" probeResult="failure" output=<
Nov 28 20:52:55 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Nov 28 20:52:55 crc kubenswrapper[4957]: >
Nov 28 20:52:55 crc kubenswrapper[4957]: I1128 20:52:55.411830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerStarted","Data":"c1cc8d2b72cfaeccdb6fe329ec7f3f12a56b006a039a7a1e29c2f81e6a423ead"}
Nov 28 20:52:55 crc kubenswrapper[4957]: I1128 20:52:55.432072 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qs6q2" podStartSLOduration=2.530483088 podStartE2EDuration="1m2.43205192s" podCreationTimestamp="2025-11-28 20:51:53 +0000 UTC" firstStartedPulling="2025-11-28 20:51:53.905287142 +0000 UTC m=+153.373935051" lastFinishedPulling="2025-11-28 20:52:53.806855974 +0000 UTC m=+213.275503883" observedRunningTime="2025-11-28 20:52:55.429067117 +0000 UTC m=+214.897715036" watchObservedRunningTime="2025-11-28 20:52:55.43205192 +0000 UTC m=+214.900699839"
Nov 28 20:52:59 crc kubenswrapper[4957]: I1128 20:52:59.435461 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerStarted","Data":"ea4d2e7bba6e5a4d0a27d64c70cede7525674456a4d27498e5110f744f6ed10d"}
Nov 28 20:52:59 crc kubenswrapper[4957]: I1128 20:52:59.465182 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qp4kq" podStartSLOduration=2.6971621409999997 podStartE2EDuration="1m5.465153276s" podCreationTimestamp="2025-11-28 20:51:54 +0000 UTC" firstStartedPulling="2025-11-28 20:51:55.952451477 +0000 UTC m=+155.421099386" lastFinishedPulling="2025-11-28 20:52:58.720442612 +0000 UTC m=+218.189090521" observedRunningTime="2025-11-28 20:52:59.457635533 +0000 UTC m=+218.926283512" watchObservedRunningTime="2025-11-28 20:52:59.465153276 +0000 UTC m=+218.933801225"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.300944 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gdnl"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.301000 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gdnl"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.362648 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gdnl"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.494946 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gdnl"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.570989 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrcxz"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.618052 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrcxz"
Nov 28 20:53:01 crc kubenswrapper[4957]: I1128 20:53:01.695728 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8z9hn"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.078126 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzzwl"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.394274 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrcxz"]
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.443289 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qs6q2"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.443375 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qs6q2"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.453923 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrcxz" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="registry-server" containerID="cri-o://c486d3f0254b30205886aebf0a86b2015026f15f8f32eb5d5617944a3f385bd1" gracePeriod=2
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.498748 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qs6q2"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.540136 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qs6q2"
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.992720 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"]
Nov 28 20:53:03 crc kubenswrapper[4957]: I1128 20:53:03.992970 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8z9hn" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="registry-server" containerID="cri-o://953124c2090285e169ba584b73c984d350da1ae2294061f5dbb4318c804d30c8" gracePeriod=2
Nov 28 20:53:04 crc kubenswrapper[4957]: I1128 20:53:04.289599 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sn2tq"
Nov 28 20:53:04 crc kubenswrapper[4957]: I1128 20:53:04.326860 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sn2tq"
Nov 28 20:53:04 crc kubenswrapper[4957]: I1128 20:53:04.632678 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qp4kq"
Nov 28 20:53:04 crc kubenswrapper[4957]: I1128 20:53:04.632717 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qp4kq"
Nov 28 20:53:04 crc kubenswrapper[4957]: I1128 20:53:04.667449 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qp4kq"
Nov 28 20:53:05 crc kubenswrapper[4957]: I1128 20:53:05.501457 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qp4kq"
Nov 28 20:53:05 crc kubenswrapper[4957]: I1128 20:53:05.799134 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"]
Nov 28 20:53:05 crc kubenswrapper[4957]: I1128 20:53:05.799775 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qs6q2" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="registry-server" containerID="cri-o://c1cc8d2b72cfaeccdb6fe329ec7f3f12a56b006a039a7a1e29c2f81e6a423ead" gracePeriod=2
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.475734 4957 generic.go:334] "Generic (PLEG): container finished" podID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerID="c1cc8d2b72cfaeccdb6fe329ec7f3f12a56b006a039a7a1e29c2f81e6a423ead" exitCode=0
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.475814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerDied","Data":"c1cc8d2b72cfaeccdb6fe329ec7f3f12a56b006a039a7a1e29c2f81e6a423ead"}
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.478671 4957 generic.go:334] "Generic (PLEG): container finished" podID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerID="c486d3f0254b30205886aebf0a86b2015026f15f8f32eb5d5617944a3f385bd1" exitCode=0
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.478735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerDied","Data":"c486d3f0254b30205886aebf0a86b2015026f15f8f32eb5d5617944a3f385bd1"}
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.481102 4957 generic.go:334] "Generic (PLEG): container finished" podID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerID="953124c2090285e169ba584b73c984d350da1ae2294061f5dbb4318c804d30c8" exitCode=0
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.481151 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerDied","Data":"953124c2090285e169ba584b73c984d350da1ae2294061f5dbb4318c804d30c8"}
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.541421 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrcxz"
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.711903 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs6q2"
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.719405 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mpsz\" (UniqueName: \"kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz\") pod \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.719531 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities\") pod \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.719551 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content\") pod \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\" (UID: \"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.724275 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities" (OuterVolumeSpecName: "utilities") pod "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" (UID: "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.731034 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz" (OuterVolumeSpecName: "kube-api-access-5mpsz") pod "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" (UID: "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9"). InnerVolumeSpecName "kube-api-access-5mpsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.784029 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" (UID: "1abfdb12-1213-4ec8-b2f0-c1b4bde073d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821017 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content\") pod \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821431 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f76nr\" (UniqueName: \"kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr\") pod \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821468 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities\") pod \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\" (UID: \"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce\") "
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821657 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mpsz\" (UniqueName: \"kubernetes.io/projected/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-kube-api-access-5mpsz\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821675 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.821688 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.822424 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities" (OuterVolumeSpecName: "utilities") pod "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" (UID: "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.825898 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr" (OuterVolumeSpecName: "kube-api-access-f76nr") pod "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" (UID: "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce"). InnerVolumeSpecName "kube-api-access-f76nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.844715 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" (UID: "bc7d1ffc-0e94-4aa1-8068-23d1083c57ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.924570 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f76nr\" (UniqueName: \"kubernetes.io/projected/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-kube-api-access-f76nr\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.924609 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:06 crc kubenswrapper[4957]: I1128 20:53:06.924619 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.093442 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z9hn"
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.227971 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities\") pod \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") "
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.228022 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content\") pod \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") "
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.228125 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qd2q\" (UniqueName: \"kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q\") pod \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\" (UID: \"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4\") "
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.228847 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities" (OuterVolumeSpecName: "utilities") pod "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" (UID: "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.230970 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q" (OuterVolumeSpecName: "kube-api-access-2qd2q") pod "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" (UID: "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4"). InnerVolumeSpecName "kube-api-access-2qd2q".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.271011 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" (UID: "aed60da8-b12f-4fcd-81f6-8fbfcddf08b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.329987 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.330033 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.330050 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qd2q\" (UniqueName: \"kubernetes.io/projected/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4-kube-api-access-2qd2q\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.488713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs6q2" event={"ID":"bc7d1ffc-0e94-4aa1-8068-23d1083c57ce","Type":"ContainerDied","Data":"8d1c7cf230b13cef246258d4ab08b9eb0a56d3583b4b248ee318a05c69b76f3a"} Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.488775 4957 scope.go:117] "RemoveContainer" containerID="c1cc8d2b72cfaeccdb6fe329ec7f3f12a56b006a039a7a1e29c2f81e6a423ead" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.488776 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs6q2" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.492384 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrcxz" event={"ID":"1abfdb12-1213-4ec8-b2f0-c1b4bde073d9","Type":"ContainerDied","Data":"e3d72e48caf0fa4f06bdbaab28b9ce34f52f4c84fe8a980ed9dd23484f97b938"} Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.492441 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrcxz" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.509063 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrcxz"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.512542 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrcxz"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.570439 4957 scope.go:117] "RemoveContainer" containerID="dc9cc1ced89089b648c90673640e5c5fb821835b591f08e0978e1719e7063bad" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.571694 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z9hn" event={"ID":"aed60da8-b12f-4fcd-81f6-8fbfcddf08b4","Type":"ContainerDied","Data":"0abe748a8f20c6fb34ee276e2a75eb5304e5f91109f8a1434113500d7b23da09"} Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.571794 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8z9hn" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.583794 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.591786 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs6q2"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.593700 4957 scope.go:117] "RemoveContainer" containerID="f0ec28726f1dea03a31bed1423e38a6d244aa54bd727b61ce1f0fe60d04e9bac" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.608457 4957 scope.go:117] "RemoveContainer" containerID="c486d3f0254b30205886aebf0a86b2015026f15f8f32eb5d5617944a3f385bd1" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.614609 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.617811 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8z9hn"] Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.621331 4957 scope.go:117] "RemoveContainer" containerID="a7e6bc2aab55216f18caf33a0c162440b21ac2f18e63af16e5aafb3853b89142" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.643394 4957 scope.go:117] "RemoveContainer" containerID="07f463a83e9e187f1fc22d7dfa64159c80edb71db00b6dc7d1723d34da687d76" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.654903 4957 scope.go:117] "RemoveContainer" containerID="953124c2090285e169ba584b73c984d350da1ae2294061f5dbb4318c804d30c8" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.665533 4957 scope.go:117] "RemoveContainer" containerID="5ebe020491ef13928310bdc553aae416acb9d22869f89c14924d2f8490dad84c" Nov 28 20:53:07 crc kubenswrapper[4957]: I1128 20:53:07.680192 4957 scope.go:117] "RemoveContainer" containerID="f86a1c043bb1217de98797c4e112e544f5b0985bf85073ed6e65627e076954c7" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.795546 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.797003 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qp4kq" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="registry-server" containerID="cri-o://ea4d2e7bba6e5a4d0a27d64c70cede7525674456a4d27498e5110f744f6ed10d" gracePeriod=2 Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.820375 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" path="/var/lib/kubelet/pods/1abfdb12-1213-4ec8-b2f0-c1b4bde073d9/volumes" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.821437 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" path="/var/lib/kubelet/pods/aed60da8-b12f-4fcd-81f6-8fbfcddf08b4/volumes" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.822045 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" path="/var/lib/kubelet/pods/bc7d1ffc-0e94-4aa1-8068-23d1083c57ce/volumes" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.992878 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.992958 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.993049 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.993829 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 20:53:08 crc kubenswrapper[4957]: I1128 20:53:08.993967 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb" gracePeriod=600 Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.599930 4957 generic.go:334] "Generic (PLEG): container finished" podID="a9881d17-454a-476d-903b-66b306a1290a" containerID="ea4d2e7bba6e5a4d0a27d64c70cede7525674456a4d27498e5110f744f6ed10d" exitCode=0 Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.599975 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerDied","Data":"ea4d2e7bba6e5a4d0a27d64c70cede7525674456a4d27498e5110f744f6ed10d"} Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.602296 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb" exitCode=0 Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.602341 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb"} Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.843121 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.992945 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qlpl\" (UniqueName: \"kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl\") pod \"a9881d17-454a-476d-903b-66b306a1290a\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.992995 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content\") pod \"a9881d17-454a-476d-903b-66b306a1290a\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.993023 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities\") pod \"a9881d17-454a-476d-903b-66b306a1290a\" (UID: \"a9881d17-454a-476d-903b-66b306a1290a\") " Nov 28 20:53:11 crc kubenswrapper[4957]: I1128 20:53:11.994038 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities" (OuterVolumeSpecName: "utilities") pod "a9881d17-454a-476d-903b-66b306a1290a" (UID: "a9881d17-454a-476d-903b-66b306a1290a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.007384 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl" (OuterVolumeSpecName: "kube-api-access-2qlpl") pod "a9881d17-454a-476d-903b-66b306a1290a" (UID: "a9881d17-454a-476d-903b-66b306a1290a"). InnerVolumeSpecName "kube-api-access-2qlpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.094590 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qlpl\" (UniqueName: \"kubernetes.io/projected/a9881d17-454a-476d-903b-66b306a1290a-kube-api-access-2qlpl\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.094630 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.101654 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9881d17-454a-476d-903b-66b306a1290a" (UID: "a9881d17-454a-476d-903b-66b306a1290a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.195448 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9881d17-454a-476d-903b-66b306a1290a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.614390 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qp4kq" event={"ID":"a9881d17-454a-476d-903b-66b306a1290a","Type":"ContainerDied","Data":"ea9b88bc7567cd6b0cef30cea6627a72633177dfbc6809a7bb7c7278379ac914"} Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.614455 4957 scope.go:117] "RemoveContainer" containerID="ea4d2e7bba6e5a4d0a27d64c70cede7525674456a4d27498e5110f744f6ed10d" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.614624 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qp4kq" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.625017 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954"} Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.643993 4957 scope.go:117] "RemoveContainer" containerID="c0f6b65ce236cde94743c7638d3496158665a612626f81059cccdabb98c462c4" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.654377 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.657461 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qp4kq"] Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.678733 4957 scope.go:117] "RemoveContainer" containerID="8f49f8c7722d6e617ca7aefc54fd72bf1c1b8b6dd57e7700b6b7ac8029563d88" Nov 28 20:53:12 crc kubenswrapper[4957]: I1128 20:53:12.819038 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9881d17-454a-476d-903b-66b306a1290a" path="/var/lib/kubelet/pods/a9881d17-454a-476d-903b-66b306a1290a/volumes" Nov 28 20:53:13 crc kubenswrapper[4957]: I1128 20:53:13.720126 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9529v"] Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.852598 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855831 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855856 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855872 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855880 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855891 4957 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855897 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855907 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855913 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855923 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855929 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855937 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855943 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855951 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855957 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855973 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.855984 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.855998 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856006 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="extract-utilities" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856015 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856021 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856031 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="extract-content" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856036 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="extract-content" Nov 28 20:53:20 crc 
kubenswrapper[4957]: E1128 20:53:20.856044 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856049 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856147 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed60da8-b12f-4fcd-81f6-8fbfcddf08b4" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856164 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7d1ffc-0e94-4aa1-8068-23d1083c57ce" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856180 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9881d17-454a-476d-903b-66b306a1290a" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856189 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abfdb12-1213-4ec8-b2f0-c1b4bde073d9" containerName="registry-server" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856499 4957 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856522 4957 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856645 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856660 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856669 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856676 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856683 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856690 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856700 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856707 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856715 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856722 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856733 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856739 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.856747 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856754 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856868 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856879 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856887 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856897 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.856908 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.857097 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.858267 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.858814 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7" gracePeriod=15 Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.859839 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883" gracePeriod=15 Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.859988 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449" gracePeriod=15 Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.860030 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c" gracePeriod=15 Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.860091 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d" gracePeriod=15 Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.861327 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 28 20:53:20 crc kubenswrapper[4957]: E1128 20:53:20.905525 4957 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.996870 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997169 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997203 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997231 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997249 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997437 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:20 crc kubenswrapper[4957]: I1128 20:53:20.997618 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098665 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098727 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098758 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098781 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098793 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098839 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098834 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098881 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098953 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.098996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.099034 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.099041 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.099093 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.099099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.099119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.206474 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: E1128 20:53:21.227549 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c46fbc7e75021 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 20:53:21.226952737 +0000 UTC m=+240.695600646,LastTimestamp:2025-11-28 20:53:21.226952737 +0000 UTC m=+240.695600646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.699628 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a7ccd8506bf227816eab84f2385ae17c3b47c4d2be0d5a8f90e45d99989325ab"} Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.699724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b604d7961a11ad5f4992f1678e4a079385e12b07b6937ec676aa952f77285b6"} Nov 28 20:53:21 crc kubenswrapper[4957]: E1128 20:53:21.700854 4957 kubelet.go:1929] "Failed creating a mirror 
pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.704148 4957 generic.go:334] "Generic (PLEG): container finished" podID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" containerID="da05f2889023bb8d7058e4f5a46fbb5469a8cb4dc98379918cf11aaea11f23b9" exitCode=0 Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.704193 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba","Type":"ContainerDied","Data":"da05f2889023bb8d7058e4f5a46fbb5469a8cb4dc98379918cf11aaea11f23b9"} Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.705042 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.707823 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.709834 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.710915 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883" exitCode=0 Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.710937 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449" exitCode=0 Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.710945 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c" exitCode=0 Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.710952 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d" exitCode=2 Nov 28 20:53:21 crc kubenswrapper[4957]: I1128 20:53:21.710997 4957 scope.go:117] "RemoveContainer" containerID="bb383272b8061b3b843386f1d9fd732c37e6722a2ec83d280c5965f841688085" Nov 28 20:53:22 crc kubenswrapper[4957]: I1128 20:53:22.719738 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 20:53:22 crc kubenswrapper[4957]: I1128 20:53:22.982165 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 20:53:22 crc kubenswrapper[4957]: I1128 20:53:22.983260 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.122593 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock\") pod \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.122923 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access\") pod \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.122761 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" (UID: "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.122998 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir\") pod \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\" (UID: \"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.123028 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" (UID: "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.123575 4957 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.123602 4957 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.130282 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" (UID: "60d6e1ca-67ab-4fe9-b861-65930c4ff0ba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.209723 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.210549 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.211108 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.211386 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.225459 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d6e1ca-67ab-4fe9-b861-65930c4ff0ba-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.326766 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.326847 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.326915 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.326954 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.326992 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.327080 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.327714 4957 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.327736 4957 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.327744 4957 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.728190 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.728931 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7" exitCode=0 Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.729070 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.729348 4957 scope.go:117] "RemoveContainer" containerID="0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.730928 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60d6e1ca-67ab-4fe9-b861-65930c4ff0ba","Type":"ContainerDied","Data":"14147d72bdeadff7f65a3cb5b495ce09bf1cad8532067ae18bb6001dccd9d619"} Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.730961 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14147d72bdeadff7f65a3cb5b495ce09bf1cad8532067ae18bb6001dccd9d619" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.730990 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.744920 4957 scope.go:117] "RemoveContainer" containerID="7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.746966 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.747161 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.774446 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.774749 4957 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.797480 4957 scope.go:117] "RemoveContainer" containerID="72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.821867 4957 scope.go:117] "RemoveContainer" containerID="f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.842290 4957 scope.go:117] "RemoveContainer" containerID="55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.845093 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:53:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:53:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:53:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T20:53:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 
38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.845423 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.845688 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.845927 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.846177 4957 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.846204 4957 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.864267 4957 scope.go:117] "RemoveContainer" containerID="baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.906202 4957 scope.go:117] "RemoveContainer" containerID="0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.906848 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\": container with ID starting with 0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883 not found: ID does not exist" containerID="0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.906885 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883"} err="failed to get container status \"0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\": rpc error: code = NotFound desc = could not find container \"0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883\": container with ID starting with 0d947c19e0b8f519bcfa299a21d26b8e71b0b4c58b38840700b94d92c3f16883 not found: ID does not exist" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.906911 4957 scope.go:117] "RemoveContainer" containerID="7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.907183 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\": container with ID starting with 7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449 not found: ID does not exist" containerID="7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.907232 4957 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449"} err="failed to get container status \"7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\": rpc error: code = NotFound desc = could not find container \"7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449\": container with ID starting with 7f13f75d0991486492272f3045c91630510f53dc64e6bccfb15f7b9ba058e449 not found: ID does not exist" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.907249 4957 scope.go:117] "RemoveContainer" containerID="72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.908304 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\": container with ID starting with 72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c not found: ID does not exist" containerID="72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.908334 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c"} err="failed to get container status \"72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\": rpc error: code = NotFound desc = could not find container \"72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c\": container with ID starting with 72602d87d36dc8371ec05bc464b42a65bc247c91919a252af75712161080800c not found: ID does not exist" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.908351 4957 scope.go:117] "RemoveContainer" containerID="f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.908886 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\": container with ID starting with f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d not found: ID does not exist" containerID="f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.908921 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d"} err="failed to get container status \"f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\": rpc error: code = NotFound desc = could not find container \"f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d\": container with ID starting with f60335a6e174aa35d0a5666560cea0ef025b8e77ae49bbb60338eb1512a3d22d not found: ID does not exist" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.908942 4957 scope.go:117] "RemoveContainer" containerID="55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.909370 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\": container with ID starting with 55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7 not found: ID does 
not exist" containerID="55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.909472 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7"} err="failed to get container status \"55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\": rpc error: code = NotFound desc = could not find container \"55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7\": container with ID starting with 55a7a1a6ec5adf0a582e9f8910a674ef61dccc0a82861a41cface49902de61f7 not found: ID does not exist" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.909557 4957 scope.go:117] "RemoveContainer" containerID="baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0" Nov 28 20:53:23 crc kubenswrapper[4957]: E1128 20:53:23.912203 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\": container with ID starting with baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0 not found: ID does not exist" containerID="baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0" Nov 28 20:53:23 crc kubenswrapper[4957]: I1128 20:53:23.912253 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0"} err="failed to get container status \"baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\": rpc error: code = NotFound desc = could not find container \"baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0\": container with ID starting with baf0bed25e70f630c81438f5534c11d01691ca15397b7718b8a10eccf5bad8f0 not found: ID does not exist" Nov 28 20:53:24 crc kubenswrapper[4957]: I1128 20:53:24.819449 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.485709 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.486424 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.486756 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.486962 4957 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.487149 4957 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:26 crc kubenswrapper[4957]: I1128 20:53:26.487174 4957 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.487399 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="200ms" Nov 28 20:53:26 crc kubenswrapper[4957]: E1128 20:53:26.687993 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="400ms" Nov 28 20:53:27 crc kubenswrapper[4957]: E1128 20:53:27.088550 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="800ms" Nov 28 20:53:27 crc kubenswrapper[4957]: E1128 20:53:27.889651 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="1.6s" Nov 28 20:53:28 crc kubenswrapper[4957]: E1128 20:53:28.014737 4957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c46fbc7e75021 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 20:53:21.226952737 +0000 UTC m=+240.695600646,LastTimestamp:2025-11-28 20:53:21.226952737 +0000 UTC m=+240.695600646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 20:53:29 crc kubenswrapper[4957]: E1128 20:53:29.490456 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="3.2s" Nov 28 20:53:30 crc kubenswrapper[4957]: I1128 20:53:30.816125 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:31 crc kubenswrapper[4957]: I1128 20:53:31.812049 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:31 crc kubenswrapper[4957]: I1128 20:53:31.814273 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:31 crc kubenswrapper[4957]: I1128 20:53:31.841091 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:31 crc kubenswrapper[4957]: I1128 20:53:31.841555 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:31 crc kubenswrapper[4957]: E1128 20:53:31.842137 4957 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:31 crc kubenswrapper[4957]: I1128 20:53:31.843086 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:32 crc kubenswrapper[4957]: E1128 20:53:32.700322 4957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="6.4s" Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.816394 4957 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a8c45977549601fd3474ce72bcaea37a2b63bb29de2342f442430722e974b6b4" exitCode=0 Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.827515 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a8c45977549601fd3474ce72bcaea37a2b63bb29de2342f442430722e974b6b4"} Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.827608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c19fed6bf0b4eb6cef9163c4079c0354317251dface528e71019507f0078d8cf"} Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.828162 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.828244 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:32 crc kubenswrapper[4957]: E1128 20:53:32.828982 4957 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:32 crc kubenswrapper[4957]: I1128 20:53:32.828997 4957 status_manager.go:851] "Failed to get status for pod" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.829011 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7b9c0a6231e8937bf45334475e774ec8ec381031cfb8e3fe1034f4a171ba85a9"} Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.829649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eb6ae40c4d10bbc56e064718c810be1eba84ae66ea6a498227f59ef844408edb"} Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.832945 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.832980 4957 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d" exitCode=1 Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.833000 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d"} Nov 28 20:53:33 crc kubenswrapper[4957]: I1128 20:53:33.833419 4957 scope.go:117] "RemoveContainer" containerID="c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d" Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.844241 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.845182 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d1342c5746de9786578a0dec72f74eb7c0f9d014e725c71c0dfc185092cd50b5"} Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850424 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7601a99ae7667a8eb99e0717ee8f39c6d43ec9e45e440f62548561834e00e3c3"} Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850468 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"262ff1d29a093d6282664378e08c071e1ba2d557ef812001c79fb31c5aab55f4"} Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850479 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cdbc28f18835a6d4ed1795dcd22786b64f2bec51ee8a1f7432e2225debb6a7c2"} Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850733 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850751 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:34 crc kubenswrapper[4957]: I1128 20:53:34.850887 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:36 crc kubenswrapper[4957]: I1128 20:53:36.843811 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:36 crc kubenswrapper[4957]: I1128 20:53:36.843889 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:36 crc kubenswrapper[4957]: I1128 20:53:36.852197 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:37 crc kubenswrapper[4957]: I1128 20:53:37.360039 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:53:37 crc kubenswrapper[4957]: I1128 20:53:37.360351 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 20:53:37 crc kubenswrapper[4957]: I1128 20:53:37.360406 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 20:53:38 crc kubenswrapper[4957]: I1128 20:53:38.772015 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" podUID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" containerName="oauth-openshift" containerID="cri-o://33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe" gracePeriod=15 Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.192266 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.225420 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dqq9\" (UniqueName: \"kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.225474 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.225503 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.233065 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9" (OuterVolumeSpecName: "kube-api-access-8dqq9") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "kube-api-access-8dqq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.237490 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.237710 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326870 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326913 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326928 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326944 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326963 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.326980 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327000 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327025 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc 
kubenswrapper[4957]: I1128 20:53:39.327077 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327108 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies\") pod \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\" (UID: \"8a1acf6a-47e9-482f-88e7-87d508ec3b4b\") " Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327282 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dqq9\" (UniqueName: \"kubernetes.io/projected/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-kube-api-access-8dqq9\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327299 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327313 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327920 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327928 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.327982 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.328030 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.328222 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.330311 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.330683 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.330767 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.330886 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.331137 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.331166 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8a1acf6a-47e9-482f-88e7-87d508ec3b4b" (UID: "8a1acf6a-47e9-482f-88e7-87d508ec3b4b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428015 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428057 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428070 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428080 4957 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428091 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428100 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428110 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428120 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428129 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428137 4957 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.428146 4957 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a1acf6a-47e9-482f-88e7-87d508ec3b4b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.882547 4957 generic.go:334] "Generic (PLEG): container finished" podID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" 
containerID="33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe" exitCode=0 Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.882771 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" event={"ID":"8a1acf6a-47e9-482f-88e7-87d508ec3b4b","Type":"ContainerDied","Data":"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe"} Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.883596 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" event={"ID":"8a1acf6a-47e9-482f-88e7-87d508ec3b4b","Type":"ContainerDied","Data":"08e3af4c0bee3d849c72e61f45134ec42ebb0f73d360a1bbaf0daf1515f8c263"} Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.882879 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9529v" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.883679 4957 scope.go:117] "RemoveContainer" containerID="33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.904676 4957 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.910118 4957 scope.go:117] "RemoveContainer" containerID="33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe" Nov 28 20:53:39 crc kubenswrapper[4957]: E1128 20:53:39.911134 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe\": container with ID starting with 33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe not found: ID does not exist" containerID="33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe" Nov 28 20:53:39 crc kubenswrapper[4957]: I1128 20:53:39.911176 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe"} err="failed to get container status \"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe\": rpc error: code = NotFound desc = could not find container \"33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe\": container with ID starting with 33eb8e78040a7ebeef130cd463d6786cf552e5d368bae573fb4024de7d4c23fe not found: ID does not exist" Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.826502 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a699a766-8713-4962-8f33-432fbf5f9283" Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.890294 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.890320 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.893407 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a699a766-8713-4962-8f33-432fbf5f9283" 
Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.896744 4957 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://eb6ae40c4d10bbc56e064718c810be1eba84ae66ea6a498227f59ef844408edb" Nov 28 20:53:40 crc kubenswrapper[4957]: I1128 20:53:40.896870 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:53:41 crc kubenswrapper[4957]: I1128 20:53:41.896635 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:41 crc kubenswrapper[4957]: I1128 20:53:41.898708 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:53:41 crc kubenswrapper[4957]: I1128 20:53:41.899486 4957 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a699a766-8713-4962-8f33-432fbf5f9283" Nov 28 20:53:43 crc kubenswrapper[4957]: I1128 20:53:43.655912 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:53:47 crc kubenswrapper[4957]: I1128 20:53:47.360979 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 20:53:47 crc kubenswrapper[4957]: I1128 20:53:47.361754 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 20:53:50 crc kubenswrapper[4957]: I1128 20:53:50.246122 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 20:53:50 crc kubenswrapper[4957]: I1128 20:53:50.401041 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 20:53:50 crc kubenswrapper[4957]: I1128 20:53:50.839200 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 20:53:50 crc kubenswrapper[4957]: I1128 20:53:50.930778 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 20:53:52 crc kubenswrapper[4957]: I1128 20:53:52.054253 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 20:53:52 crc kubenswrapper[4957]: I1128 20:53:52.264865 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 20:53:52 crc kubenswrapper[4957]: I1128 20:53:52.839193 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.284269 4957 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.293034 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.318083 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.372078 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.416142 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.600164 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.691058 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 28 20:53:53 crc kubenswrapper[4957]: I1128 20:53:53.993601 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.017721 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.022348 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.134093 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.159772 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.191085 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.241672 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.489412 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.735640 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.769318 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 20:53:54 crc kubenswrapper[4957]: I1128 20:53:54.814958 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.027949 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.106706 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.163114 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.213572 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.245618 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.397392 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.432493 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.515650 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.573403 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.578381 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.654576 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.679740 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.689013 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.693250 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.714892 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.886576 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 20:53:55 crc kubenswrapper[4957]: I1128 20:53:55.984473 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.082151 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.163741 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.165001 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.274104 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.313630 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.323970 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.350041 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.437601 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.449868 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.480929 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.538265 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.573668 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.624377 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.650011 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.735268 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.868152 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 28 20:53:56 crc kubenswrapper[4957]: I1128 20:53:56.947340 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.066599 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.079424 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.125364 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.150012 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.345672 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
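The "Caches populated" lines above come from client-go's reflector (reflector.go:368) and fire once the initial LIST for a watched object lands in the local store; the kubelet keeps one such watch per ConfigMap or Secret referenced by pods on the node, tagged object-"<namespace>"/"<name>". A minimal sketch of the same client-go machinery, assuming an in-cluster config (the kubelet wires this up through its internal secret/configmap managers rather than a shared factory like this):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps in a single namespace, loosely mirroring the
	// per-object reflectors the kubelet logs above.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-dns"))
	cm := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Returns once the reflector's initial LIST has populated the local
	// store -- the moment reflector.go logs "Caches populated".
	if !cache.WaitForCacheSync(stop, cm.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated")
}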
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.360740 4957 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.360796 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.360844 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.362065 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"d1342c5746de9786578a0dec72f74eb7c0f9d014e725c71c0dfc185092cd50b5"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.362186 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://d1342c5746de9786578a0dec72f74eb7c0f9d014e725c71c0dfc185092cd50b5" gracePeriod=30
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.382443 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.388481 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.464626 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.533202 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.581651 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.701345 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.706072 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.721384 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.775121 4957 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
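The five probe entries above show one full startup-probe round trip: patch_prober hits https://192.168.126.11:10257/healthz, gets connection refused, prober.go records the failure, the sync loop marks the pod unhealthy, and the kubelet kills kube-controller-manager with a 30-second grace period so CRI-O can restart it. The probe shape implied by the log looks roughly like this; path, port, and scheme are taken from the output above, while the threshold and period values are assumptions, not read from the actual manifest:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Startup probe as implied by the failure output above. The kubelet
// restarts the container once FailureThreshold consecutive probes fail.
var kcmStartupProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",
			Port:   intstr.FromInt(10257),
			Scheme: corev1.URISchemeHTTPS,
		},
	},
	FailureThreshold: 3,  // assumed value
	PeriodSeconds:    10, // assumed value
}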
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.848886 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 28 20:53:57 crc kubenswrapper[4957]: I1128 20:53:57.965535 4957 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.082303 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.104172 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.137587 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.158317 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.188719 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.213490 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.297285 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.415020 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.469854 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.476394 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.487791 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.533692 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.636184 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.683080 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.696594 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.697185 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.766497 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.796314 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 28 20:53:58 crc kubenswrapper[4957]: I1128 20:53:58.805860 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.016362 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.024107 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.033414 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.098498 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.248928 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.259690 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.303316 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.349609 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.354328 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.497488 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.568284 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.718023 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.823438 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.861929 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.861943 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.881544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.926342 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.948234 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 20:53:59 crc kubenswrapper[4957]: I1128 20:53:59.973383 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.153606 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.316304 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.322499 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.366414 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.510635 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.682001 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.725542 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.755587 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.798604 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 20:54:00 crc kubenswrapper[4957]: I1128 20:54:00.920596 4957 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.012365 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.021247 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.079687 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.123739 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.239722 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.307410 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.377440 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.444378 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.517403 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.517994 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.652777 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.678985 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.702500 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.707027 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.718528 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.765488 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.790760 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.823914 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.858003 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.881518 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.918465 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.956921 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.980805 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 28 20:54:01 crc kubenswrapper[4957]: I1128 20:54:01.993515 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.023961 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.029787 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.078782 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.183324 4957 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187381 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-9529v"] Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187440 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"] Nov 28 20:54:02 crc kubenswrapper[4957]: E1128 20:54:02.187623 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" containerName="oauth-openshift" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187635 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" containerName="oauth-openshift" Nov 28 20:54:02 crc kubenswrapper[4957]: E1128 20:54:02.187643 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" containerName="installer" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187651 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" containerName="installer" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187753 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d6e1ca-67ab-4fe9-b861-65930c4ff0ba" containerName="installer" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187765 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" containerName="oauth-openshift" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187819 4957 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.187839 4957 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="32a8b44e-3899-49e2-b3c2-20f8559964bb" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.188130 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.191307 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.194325 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.194760 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.194928 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.195031 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.195564 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.195668 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.195796 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.195952 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.196164 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.196320 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.196371 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.196833 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.202754 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.218640 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.222818 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.222797591 podStartE2EDuration="23.222797591s" podCreationTimestamp="2025-11-28 20:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:02.214067986 +0000 UTC m=+281.682715895" watchObservedRunningTime="2025-11-28 20:54:02.222797591 +0000 UTC m=+281.691445500" Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 
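kube-apiserver-crc is a static pod defined by a manifest file on the node, so the copy visible through the API is a kubelet-managed mirror pod; the "SyncLoop REMOVE" / "Trying to delete pod" / "Deleting a mirror pod" sequence above is the kubelet reconciling that mirror copy, not stopping the real containers. Mirror pods carry the upstream kubernetes.io/config.mirror marker annotation; a hedged sketch for listing them on this node (the node name "crc" is taken from the log prefix, everything else is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=crc"}) // assumed node name
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// The kubelet sets this annotation on every mirror pod it creates.
		if _, ok := p.Annotations["kubernetes.io/config.mirror"]; ok {
			fmt.Printf("mirror pod: %s/%s\n", p.Namespace, p.Name)
		}
	}
}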
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.227446 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317235 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-dir\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317285 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317314 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-policies\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317483 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317528 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317565 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pwt\" (UniqueName: \"kubernetes.io/projected/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-kube-api-access-m2pwt\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317587 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-session\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317615 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317671 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-login\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317686 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-error\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317748 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.317766 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.338375 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.351876 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-login\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418717 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-error\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418745 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418765 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418805 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-dir\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418837 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418867 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418889 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-policies\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418936 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418938 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-dir\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.418958 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.419060 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2pwt\" (UniqueName: \"kubernetes.io/projected/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-kube-api-access-m2pwt\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.419083 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-session\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.419116 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.419662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.419664 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-audit-policies\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.420157 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.420200 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.424249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-session\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.424289 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-error\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.424552 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.424922 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.425712 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-login\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.426207 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.427187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.427348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.435010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2pwt\" (UniqueName: \"kubernetes.io/projected/aa6c75a9-aa6f-4714-b7ae-50c3089467ae-kube-api-access-m2pwt\") pod \"oauth-openshift-f578d5c8f-6w5zv\" (UID: \"aa6c75a9-aa6f-4714-b7ae-50c3089467ae\") " pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.508846 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.509968 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.584845 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.595572 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.604993 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.703541 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.747204 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.817705 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.818788 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1acf6a-47e9-482f-88e7-87d508ec3b4b" path="/var/lib/kubelet/pods/8a1acf6a-47e9-482f-88e7-87d508ec3b4b/volumes"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.836424 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.917913 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.925414 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.959969 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.977442 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 20:54:02 crc kubenswrapper[4957]: I1128 20:54:02.991645 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.045349 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.095348 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.130958 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.182321 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.247079 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.261682 4957 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.354527 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.391843 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.404405 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.432615 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.450051 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.520179 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.532173 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.539724 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.645732 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.716027 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.742528 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.847607 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.855477 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.914886 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 28 20:54:03 crc kubenswrapper[4957]: I1128 20:54:03.998575 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.020441 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.247347 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.305672 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.568477 4957 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.590123 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.659134 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.682551 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.720181 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.813117 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.813552 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 28 20:54:04 crc kubenswrapper[4957]: I1128 20:54:04.996696 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.019846 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.094805 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.108066 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.150638 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.193778 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.236768 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.285118 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.393183 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.429142 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.475971 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.490575 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.676953 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"]
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.683832 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.761703 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.840517 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.867372 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.944362 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 20:54:05 crc kubenswrapper[4957]: I1128 20:54:05.945826 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f578d5c8f-6w5zv"]
Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.032437 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
event={"ID":"aa6c75a9-aa6f-4714-b7ae-50c3089467ae","Type":"ContainerStarted","Data":"354897b11abe722abc3d7f09d170a4e1c1f9bc489b6cf8a5cfd831f752a11b61"} Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.047136 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.056859 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.317239 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.391418 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.412648 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.553417 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.931829 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.947157 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.953689 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 20:54:06 crc kubenswrapper[4957]: I1128 20:54:06.998324 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.044379 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv" event={"ID":"aa6c75a9-aa6f-4714-b7ae-50c3089467ae","Type":"ContainerStarted","Data":"d4955e04c70f6cccaeb83b0e1668cb5a0be02f428f3a9d612a7b81daf8618869"} Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.044793 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.049803 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.055522 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.070912 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f578d5c8f-6w5zv" podStartSLOduration=54.070895608 podStartE2EDuration="54.070895608s" podCreationTimestamp="2025-11-28 20:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:07.070037785 +0000 UTC m=+286.538685694" watchObservedRunningTime="2025-11-28 20:54:07.070895608 +0000 UTC m=+286.539543517" Nov 28 20:54:07 crc 
kubenswrapper[4957]: I1128 20:54:07.205508 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.221452 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.643905 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 20:54:07 crc kubenswrapper[4957]: I1128 20:54:07.919782 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 20:54:08 crc kubenswrapper[4957]: I1128 20:54:08.071314 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 20:54:08 crc kubenswrapper[4957]: I1128 20:54:08.348593 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 20:54:08 crc kubenswrapper[4957]: I1128 20:54:08.595068 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 20:54:08 crc kubenswrapper[4957]: I1128 20:54:08.975175 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 20:54:09 crc kubenswrapper[4957]: I1128 20:54:09.023736 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 20:54:09 crc kubenswrapper[4957]: I1128 20:54:09.616086 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 20:54:12 crc kubenswrapper[4957]: I1128 20:54:12.654401 4957 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 20:54:12 crc kubenswrapper[4957]: I1128 20:54:12.654604 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a7ccd8506bf227816eab84f2385ae17c3b47c4d2be0d5a8f90e45d99989325ab" gracePeriod=5 Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.104464 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.104893 4957 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a7ccd8506bf227816eab84f2385ae17c3b47c4d2be0d5a8f90e45d99989325ab" exitCode=137 Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.242725 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.243138 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426739 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426802 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426864 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426895 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426982 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427012 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.426997 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427055 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427337 4957 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427383 4957 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427404 4957 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.427424 4957 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.434499 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.528492 4957 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:18 crc kubenswrapper[4957]: I1128 20:54:18.820430 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 20:54:19 crc kubenswrapper[4957]: I1128 20:54:19.112625 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 20:54:19 crc kubenswrapper[4957]: I1128 20:54:19.112725 4957 scope.go:117] "RemoveContainer" containerID="a7ccd8506bf227816eab84f2385ae17c3b47c4d2be0d5a8f90e45d99989325ab" Nov 28 20:54:19 crc kubenswrapper[4957]: I1128 20:54:19.112809 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 20:54:20 crc kubenswrapper[4957]: I1128 20:54:20.669925 4957 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 28 20:54:28 crc kubenswrapper[4957]: I1128 20:54:28.170411 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 20:54:28 crc kubenswrapper[4957]: I1128 20:54:28.173808 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 20:54:28 crc kubenswrapper[4957]: I1128 20:54:28.173952 4957 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d1342c5746de9786578a0dec72f74eb7c0f9d014e725c71c0dfc185092cd50b5" exitCode=137 Nov 28 20:54:28 crc kubenswrapper[4957]: I1128 20:54:28.173994 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d1342c5746de9786578a0dec72f74eb7c0f9d014e725c71c0dfc185092cd50b5"} Nov 28 20:54:28 crc kubenswrapper[4957]: I1128 20:54:28.174032 4957 scope.go:117] "RemoveContainer" containerID="c8e9a1b0dee9af9e9c9fa0ae2eecca6814649ba376538c63e33456b0a0c2969d" Nov 28 20:54:29 crc kubenswrapper[4957]: I1128 20:54:29.182306 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 20:54:29 crc kubenswrapper[4957]: I1128 20:54:29.183421 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cdf1b45f399af67690fa79c13a2082c11674f18cca402e0c808b7933d1437991"} Nov 28 20:54:33 crc kubenswrapper[4957]: I1128 20:54:33.655981 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:54:37 crc kubenswrapper[4957]: I1128 20:54:37.360013 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:54:37 crc kubenswrapper[4957]: I1128 20:54:37.367407 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:54:38 crc kubenswrapper[4957]: I1128 20:54:38.239724 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 20:54:44 crc kubenswrapper[4957]: I1128 20:54:44.691347 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:54:44 crc kubenswrapper[4957]: I1128 20:54:44.692571 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" podUID="f84ca592-04c4-4edf-a398-0f879254007f" containerName="route-controller-manager" containerID="cri-o://22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52" 
gracePeriod=30 Nov 28 20:54:44 crc kubenswrapper[4957]: I1128 20:54:44.699410 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:54:44 crc kubenswrapper[4957]: I1128 20:54:44.699782 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" containerName="controller-manager" containerID="cri-o://e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e" gracePeriod=30 Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.048145 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.114789 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.159876 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca\") pod \"f84ca592-04c4-4edf-a398-0f879254007f\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.159931 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config\") pod \"f84ca592-04c4-4edf-a398-0f879254007f\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.160004 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shckd\" (UniqueName: \"kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd\") pod \"f84ca592-04c4-4edf-a398-0f879254007f\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.160078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert\") pod \"f84ca592-04c4-4edf-a398-0f879254007f\" (UID: \"f84ca592-04c4-4edf-a398-0f879254007f\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.160716 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f84ca592-04c4-4edf-a398-0f879254007f" (UID: "f84ca592-04c4-4edf-a398-0f879254007f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.161297 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.164887 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config" (OuterVolumeSpecName: "config") pod "f84ca592-04c4-4edf-a398-0f879254007f" (UID: "f84ca592-04c4-4edf-a398-0f879254007f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.166179 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd" (OuterVolumeSpecName: "kube-api-access-shckd") pod "f84ca592-04c4-4edf-a398-0f879254007f" (UID: "f84ca592-04c4-4edf-a398-0f879254007f"). InnerVolumeSpecName "kube-api-access-shckd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.168584 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f84ca592-04c4-4edf-a398-0f879254007f" (UID: "f84ca592-04c4-4edf-a398-0f879254007f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.265483 4957 generic.go:334] "Generic (PLEG): container finished" podID="eff2527a-b897-47e0-92ac-f9319119ee43" containerID="e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e" exitCode=0 Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.265547 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" event={"ID":"eff2527a-b897-47e0-92ac-f9319119ee43","Type":"ContainerDied","Data":"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e"} Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.265578 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" event={"ID":"eff2527a-b897-47e0-92ac-f9319119ee43","Type":"ContainerDied","Data":"31f3f98422433f50ab9761a9984ba64712842e5fc8e13a032788508c1a53eb3d"} Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.265597 4957 scope.go:117] "RemoveContainer" containerID="e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.265710 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hg84w" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289111 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb7q9\" (UniqueName: \"kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9\") pod \"eff2527a-b897-47e0-92ac-f9319119ee43\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289182 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config\") pod \"eff2527a-b897-47e0-92ac-f9319119ee43\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289262 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff2527a-b897-47e0-92ac-f9319119ee43-serving-cert\") pod \"eff2527a-b897-47e0-92ac-f9319119ee43\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289285 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca\") pod \"eff2527a-b897-47e0-92ac-f9319119ee43\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289349 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles\") pod \"eff2527a-b897-47e0-92ac-f9319119ee43\" (UID: \"eff2527a-b897-47e0-92ac-f9319119ee43\") " Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289544 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f84ca592-04c4-4edf-a398-0f879254007f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289556 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f84ca592-04c4-4edf-a398-0f879254007f-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289565 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shckd\" (UniqueName: \"kubernetes.io/projected/f84ca592-04c4-4edf-a398-0f879254007f-kube-api-access-shckd\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289740 4957 generic.go:334] "Generic (PLEG): container finished" podID="f84ca592-04c4-4edf-a398-0f879254007f" containerID="22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52" exitCode=0 Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289777 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" event={"ID":"f84ca592-04c4-4edf-a398-0f879254007f","Type":"ContainerDied","Data":"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52"} Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289811 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" 
event={"ID":"f84ca592-04c4-4edf-a398-0f879254007f","Type":"ContainerDied","Data":"27b6f50fba8461a80a5af5d2bf417a4f9045a81b3e4306b1f9a2517240090210"} Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289883 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289962 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eff2527a-b897-47e0-92ac-f9319119ee43" (UID: "eff2527a-b897-47e0-92ac-f9319119ee43"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.289984 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca" (OuterVolumeSpecName: "client-ca") pod "eff2527a-b897-47e0-92ac-f9319119ee43" (UID: "eff2527a-b897-47e0-92ac-f9319119ee43"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.290025 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config" (OuterVolumeSpecName: "config") pod "eff2527a-b897-47e0-92ac-f9319119ee43" (UID: "eff2527a-b897-47e0-92ac-f9319119ee43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.293916 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9" (OuterVolumeSpecName: "kube-api-access-rb7q9") pod "eff2527a-b897-47e0-92ac-f9319119ee43" (UID: "eff2527a-b897-47e0-92ac-f9319119ee43"). InnerVolumeSpecName "kube-api-access-rb7q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.294910 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff2527a-b897-47e0-92ac-f9319119ee43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eff2527a-b897-47e0-92ac-f9319119ee43" (UID: "eff2527a-b897-47e0-92ac-f9319119ee43"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.305559 4957 scope.go:117] "RemoveContainer" containerID="e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e" Nov 28 20:54:45 crc kubenswrapper[4957]: E1128 20:54:45.305999 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e\": container with ID starting with e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e not found: ID does not exist" containerID="e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.306044 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e"} err="failed to get container status \"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e\": rpc error: code = NotFound desc = could not find container \"e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e\": container with ID starting with e14f26f2d1428eb87ceeff31a4582084a83414b5b3eca408f2555ab377a9e37e not found: ID does not exist" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.306071 4957 scope.go:117] "RemoveContainer" containerID="22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.317602 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.319685 4957 scope.go:117] "RemoveContainer" containerID="22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52" Nov 28 20:54:45 crc kubenswrapper[4957]: E1128 20:54:45.320115 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52\": container with ID starting with 22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52 not found: ID does not exist" containerID="22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.320158 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52"} err="failed to get container status \"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52\": rpc error: code = NotFound desc = could not find container \"22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52\": container with ID starting with 22d32b6024ae7c6d8df1f69617706e365c772a9de9259a77472f6834ed818f52 not found: ID does not exist" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.323330 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5tzvk"] Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.390289 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb7q9\" (UniqueName: \"kubernetes.io/projected/eff2527a-b897-47e0-92ac-f9319119ee43-kube-api-access-rb7q9\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.390522 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.390595 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff2527a-b897-47e0-92ac-f9319119ee43-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.390658 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.390713 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff2527a-b897-47e0-92ac-f9319119ee43-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.589138 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:54:45 crc kubenswrapper[4957]: I1128 20:54:45.592434 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hg84w"] Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.107664 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:46 crc kubenswrapper[4957]: E1128 20:54:46.107907 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" containerName="controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.107920 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" containerName="controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: E1128 20:54:46.107932 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84ca592-04c4-4edf-a398-0f879254007f" containerName="route-controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.107938 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84ca592-04c4-4edf-a398-0f879254007f" containerName="route-controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: E1128 20:54:46.107953 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.107961 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.108071 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84ca592-04c4-4edf-a398-0f879254007f" containerName="route-controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.108086 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" containerName="controller-manager" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.108094 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.108546 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.110673 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.111174 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.111318 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.113473 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114066 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114092 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114156 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114395 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114435 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114473 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114531 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.114630 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.115105 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.121696 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.125863 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.126614 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.128710 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301306 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301361 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301381 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl95s\" (UniqueName: \"kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301401 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301417 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301441 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mw5\" (UniqueName: \"kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301459 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301478 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.301493 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402290 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402335 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl95s\" (UniqueName: \"kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402355 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402396 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mw5\" (UniqueName: \"kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402413 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402432 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402450 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: 
\"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.402848 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.403437 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.403776 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.404050 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.404070 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.404554 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.406075 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.406223 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.418403 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x6mw5\" (UniqueName: \"kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5\") pod \"controller-manager-7766fdf5d9-wk6dk\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") " pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.420582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl95s\" (UniqueName: \"kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s\") pod \"route-controller-manager-6ccb45c985-wj9zf\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") " pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.424863 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.434112 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.628319 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.678482 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:46 crc kubenswrapper[4957]: W1128 20:54:46.694238 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5c39d7_0ae0_418d_be5c_2bfb393ce368.slice/crio-9d2ba1d5c095349cc708cf9518093727976bd051b356d3250e1754406e2b5fa0 WatchSource:0}: Error finding container 9d2ba1d5c095349cc708cf9518093727976bd051b356d3250e1754406e2b5fa0: Status 404 returned error can't find the container with id 9d2ba1d5c095349cc708cf9518093727976bd051b356d3250e1754406e2b5fa0 Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.822065 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff2527a-b897-47e0-92ac-f9319119ee43" path="/var/lib/kubelet/pods/eff2527a-b897-47e0-92ac-f9319119ee43/volumes" Nov 28 20:54:46 crc kubenswrapper[4957]: I1128 20:54:46.823114 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84ca592-04c4-4edf-a398-0f879254007f" path="/var/lib/kubelet/pods/f84ca592-04c4-4edf-a398-0f879254007f/volumes" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.303142 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" event={"ID":"dd5c39d7-0ae0-418d-be5c-2bfb393ce368","Type":"ContainerStarted","Data":"b277c91185b058ac777614ac48c16512d7455b3003e4952b86ac9712b98cf0fb"} Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.303444 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.303456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" event={"ID":"dd5c39d7-0ae0-418d-be5c-2bfb393ce368","Type":"ContainerStarted","Data":"9d2ba1d5c095349cc708cf9518093727976bd051b356d3250e1754406e2b5fa0"} Nov 28 20:54:47 crc 
kubenswrapper[4957]: I1128 20:54:47.305473 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" event={"ID":"275c33c7-f8c3-46fa-9b47-eb572b561adb","Type":"ContainerStarted","Data":"4ef122a557432d01b4775c9e010c6be12fe2fc044cf6dd2a2aeb7198c01c4b7c"} Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.305500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" event={"ID":"275c33c7-f8c3-46fa-9b47-eb572b561adb","Type":"ContainerStarted","Data":"40944b0e5f5bfa8f1153f8a77267a736e0a3ac6e2cecbfdfed6eb183841a13d7"} Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.305737 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.308075 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.320008 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" podStartSLOduration=3.319991056 podStartE2EDuration="3.319991056s" podCreationTimestamp="2025-11-28 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:47.315560487 +0000 UTC m=+326.784208396" watchObservedRunningTime="2025-11-28 20:54:47.319991056 +0000 UTC m=+326.788638955" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.350514 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" podStartSLOduration=3.350496637 podStartE2EDuration="3.350496637s" podCreationTimestamp="2025-11-28 20:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:47.349435449 +0000 UTC m=+326.818083358" watchObservedRunningTime="2025-11-28 20:54:47.350496637 +0000 UTC m=+326.819144566" Nov 28 20:54:47 crc kubenswrapper[4957]: I1128 20:54:47.384547 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.392274 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.393191 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gdnl" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="registry-server" containerID="cri-o://b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d" gracePeriod=30 Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.408349 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.408700 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-knngc" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="registry-server" 
containerID="cri-o://f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838" gracePeriod=30 Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.412283 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.412531 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" containerName="marketplace-operator" containerID="cri-o://ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6" gracePeriod=30 Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.435551 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rv2ws"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.436771 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.442482 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.442713 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fzzwl" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="registry-server" containerID="cri-o://cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08" gracePeriod=30 Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.445774 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.445935 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sn2tq" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="registry-server" containerID="cri-o://79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc" gracePeriod=30 Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.486559 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rv2ws"] Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.549088 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.549188 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbw74\" (UniqueName: \"kubernetes.io/projected/01c31d76-bda9-44e6-b62a-04a154eeae84-kube-api-access-gbw74\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.549406 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.651779 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbw74\" (UniqueName: \"kubernetes.io/projected/01c31d76-bda9-44e6-b62a-04a154eeae84-kube-api-access-gbw74\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.651854 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.651882 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.653313 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.657657 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/01c31d76-bda9-44e6-b62a-04a154eeae84-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.671170 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbw74\" (UniqueName: \"kubernetes.io/projected/01c31d76-bda9-44e6-b62a-04a154eeae84-kube-api-access-gbw74\") pod \"marketplace-operator-79b997595-rv2ws\" (UID: \"01c31d76-bda9-44e6-b62a-04a154eeae84\") " pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.763957 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.820655 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.889956 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.909947 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.918681 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knngc" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.925262 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.962722 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmpsz\" (UniqueName: \"kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz\") pod \"ecb3c993-1aab-4223-9efb-363b35b45e24\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.962786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics\") pod \"ecb3c993-1aab-4223-9efb-363b35b45e24\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.962901 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca\") pod \"ecb3c993-1aab-4223-9efb-363b35b45e24\" (UID: \"ecb3c993-1aab-4223-9efb-363b35b45e24\") " Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.967377 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ecb3c993-1aab-4223-9efb-363b35b45e24" (UID: "ecb3c993-1aab-4223-9efb-363b35b45e24"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.974274 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ecb3c993-1aab-4223-9efb-363b35b45e24" (UID: "ecb3c993-1aab-4223-9efb-363b35b45e24"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:54:49 crc kubenswrapper[4957]: I1128 20:54:49.974968 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz" (OuterVolumeSpecName: "kube-api-access-cmpsz") pod "ecb3c993-1aab-4223-9efb-363b35b45e24" (UID: "ecb3c993-1aab-4223-9efb-363b35b45e24"). InnerVolumeSpecName "kube-api-access-cmpsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063694 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content\") pod \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063755 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities\") pod \"c093d27c-da80-4125-93fa-47a03d1082c5\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063783 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content\") pod \"2a6ea13b-5dba-46d9-a947-3a08d376c195\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063810 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plnfs\" (UniqueName: \"kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs\") pod \"c093d27c-da80-4125-93fa-47a03d1082c5\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063842 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities\") pod \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063861 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhwg\" (UniqueName: \"kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg\") pod \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063903 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl9cc\" (UniqueName: \"kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc\") pod \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\" (UID: \"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063928 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content\") pod \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063946 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities\") pod \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\" (UID: \"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063963 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities\") pod \"2a6ea13b-5dba-46d9-a947-3a08d376c195\" 
(UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063981 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r28fh\" (UniqueName: \"kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh\") pod \"2a6ea13b-5dba-46d9-a947-3a08d376c195\" (UID: \"2a6ea13b-5dba-46d9-a947-3a08d376c195\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.063997 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content\") pod \"c093d27c-da80-4125-93fa-47a03d1082c5\" (UID: \"c093d27c-da80-4125-93fa-47a03d1082c5\") " Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.064181 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.064191 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmpsz\" (UniqueName: \"kubernetes.io/projected/ecb3c993-1aab-4223-9efb-363b35b45e24-kube-api-access-cmpsz\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.064200 4957 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ecb3c993-1aab-4223-9efb-363b35b45e24-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.065051 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities" (OuterVolumeSpecName: "utilities") pod "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" (UID: "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.065217 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities" (OuterVolumeSpecName: "utilities") pod "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" (UID: "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.065237 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities" (OuterVolumeSpecName: "utilities") pod "c093d27c-da80-4125-93fa-47a03d1082c5" (UID: "c093d27c-da80-4125-93fa-47a03d1082c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.065352 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities" (OuterVolumeSpecName: "utilities") pod "2a6ea13b-5dba-46d9-a947-3a08d376c195" (UID: "2a6ea13b-5dba-46d9-a947-3a08d376c195"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.066860 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs" (OuterVolumeSpecName: "kube-api-access-plnfs") pod "c093d27c-da80-4125-93fa-47a03d1082c5" (UID: "c093d27c-da80-4125-93fa-47a03d1082c5"). InnerVolumeSpecName "kube-api-access-plnfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.067598 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc" (OuterVolumeSpecName: "kube-api-access-xl9cc") pod "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" (UID: "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7"). InnerVolumeSpecName "kube-api-access-xl9cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.067936 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh" (OuterVolumeSpecName: "kube-api-access-r28fh") pod "2a6ea13b-5dba-46d9-a947-3a08d376c195" (UID: "2a6ea13b-5dba-46d9-a947-3a08d376c195"). InnerVolumeSpecName "kube-api-access-r28fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.068628 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg" (OuterVolumeSpecName: "kube-api-access-hmhwg") pod "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" (UID: "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c"). InnerVolumeSpecName "kube-api-access-hmhwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.085736 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" (UID: "2b119a86-1fc6-45aa-8b80-3abfc5a36c7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.120084 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c093d27c-da80-4125-93fa-47a03d1082c5" (UID: "c093d27c-da80-4125-93fa-47a03d1082c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.123045 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a6ea13b-5dba-46d9-a947-3a08d376c195" (UID: "2a6ea13b-5dba-46d9-a947-3a08d376c195"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165265 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165313 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165325 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165340 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r28fh\" (UniqueName: \"kubernetes.io/projected/2a6ea13b-5dba-46d9-a947-3a08d376c195-kube-api-access-r28fh\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165356 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165365 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c093d27c-da80-4125-93fa-47a03d1082c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165373 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ea13b-5dba-46d9-a947-3a08d376c195-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165381 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plnfs\" (UniqueName: \"kubernetes.io/projected/c093d27c-da80-4125-93fa-47a03d1082c5-kube-api-access-plnfs\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165389 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165398 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmhwg\" (UniqueName: \"kubernetes.io/projected/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c-kube-api-access-hmhwg\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.165407 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl9cc\" (UniqueName: \"kubernetes.io/projected/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-kube-api-access-xl9cc\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.185132 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" (UID: "c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.227651 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rv2ws"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.266114 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.319122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" event={"ID":"01c31d76-bda9-44e6-b62a-04a154eeae84","Type":"ContainerStarted","Data":"f00822f7b1e819d9ccacb9e814fe24285e70790be5e43e5d41d07a76887d693d"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.321116 4957 generic.go:334] "Generic (PLEG): container finished" podID="c093d27c-da80-4125-93fa-47a03d1082c5" containerID="b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d" exitCode=0 Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.321160 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gdnl" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.321165 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerDied","Data":"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.321182 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gdnl" event={"ID":"c093d27c-da80-4125-93fa-47a03d1082c5","Type":"ContainerDied","Data":"11f6e836154519e37885ffbd983752ae5543eb806b1f5ea62996aa9457e31ac9"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.321197 4957 scope.go:117] "RemoveContainer" containerID="b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.323640 4957 generic.go:334] "Generic (PLEG): container finished" podID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerID="f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838" exitCode=0 Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.323691 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerDied","Data":"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.323880 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knngc" event={"ID":"2a6ea13b-5dba-46d9-a947-3a08d376c195","Type":"ContainerDied","Data":"8263108fbd90972c219f02aea36eb68348dd73a3fc99e99382f0267bce94e23d"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.323709 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-knngc" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.325700 4957 generic.go:334] "Generic (PLEG): container finished" podID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerID="cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08" exitCode=0 Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.325741 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzzwl" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.325757 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerDied","Data":"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.325775 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzzwl" event={"ID":"2b119a86-1fc6-45aa-8b80-3abfc5a36c7c","Type":"ContainerDied","Data":"84a19931b4069045db1475c66007f69d560c5245ad856c951c0e829b3217c984"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.327661 4957 generic.go:334] "Generic (PLEG): container finished" podID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerID="79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc" exitCode=0 Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.327748 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sn2tq" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.327766 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerDied","Data":"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.327800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sn2tq" event={"ID":"c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7","Type":"ContainerDied","Data":"7877f6b24b8b657ae3663d25b37cddb9e5c7a3d7b7c5162c7edcb10f25a857f2"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.329630 4957 generic.go:334] "Generic (PLEG): container finished" podID="ecb3c993-1aab-4223-9efb-363b35b45e24" containerID="ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6" exitCode=0 Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.329654 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" event={"ID":"ecb3c993-1aab-4223-9efb-363b35b45e24","Type":"ContainerDied","Data":"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.329669 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" event={"ID":"ecb3c993-1aab-4223-9efb-363b35b45e24","Type":"ContainerDied","Data":"bf33a029d95bf0653ce41ff8bd9134cf30cf88ebd559b611654529bd2f29e759"} Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.329715 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bm8t5" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.335683 4957 scope.go:117] "RemoveContainer" containerID="47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.354166 4957 scope.go:117] "RemoveContainer" containerID="85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.375499 4957 scope.go:117] "RemoveContainer" containerID="b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.381319 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d\": container with ID starting with b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d not found: ID does not exist" containerID="b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.381465 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d"} err="failed to get container status \"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d\": rpc error: code = NotFound desc = could not find container \"b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d\": container with ID starting with b76d26e7a77ebfcaaa6fc06251ad706e3b4c800535897fb68df44937f63a7e1d not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.381552 4957 scope.go:117] "RemoveContainer" containerID="47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.382325 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf\": container with ID starting with 47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf not found: ID does not exist" containerID="47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.382346 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf"} err="failed to get container status \"47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf\": rpc error: code = NotFound desc = could not find container \"47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf\": container with ID starting with 47db36371f1faf0e6eda94c8c8725632d281c6dc00cd24f4d2d1bb782b299bdf not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.382359 4957 scope.go:117] "RemoveContainer" containerID="85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.383191 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b\": container with ID starting with 85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b not found: ID does not exist" 
containerID="85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.383251 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b"} err="failed to get container status \"85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b\": rpc error: code = NotFound desc = could not find container \"85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b\": container with ID starting with 85da24ec4313aad4c0630d357baaa4ebfba5a0aeb5e954a9f4c4b3153e47a41b not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.383278 4957 scope.go:117] "RemoveContainer" containerID="f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.400144 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.400586 4957 scope.go:117] "RemoveContainer" containerID="b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.404095 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gdnl"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.420753 4957 scope.go:117] "RemoveContainer" containerID="9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.422893 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.428429 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sn2tq"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.437886 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.440337 4957 scope.go:117] "RemoveContainer" containerID="f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.440780 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838\": container with ID starting with f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838 not found: ID does not exist" containerID="f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.440876 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838"} err="failed to get container status \"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838\": rpc error: code = NotFound desc = could not find container \"f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838\": container with ID starting with f75fcdd586e2147e7023bd1ef23a1728a7354282e7fc981b6c06b6ce0d85c838 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.440972 4957 scope.go:117] "RemoveContainer" containerID="b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 
20:54:50.441273 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bm8t5"] Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.441414 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec\": container with ID starting with b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec not found: ID does not exist" containerID="b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.441498 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec"} err="failed to get container status \"b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec\": rpc error: code = NotFound desc = could not find container \"b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec\": container with ID starting with b49748ec3c625d7b9c64d86bf8d559b7dd21bd2446aeb710fa16954f0ae199ec not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.441574 4957 scope.go:117] "RemoveContainer" containerID="9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.442092 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3\": container with ID starting with 9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3 not found: ID does not exist" containerID="9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.442189 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3"} err="failed to get container status \"9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3\": rpc error: code = NotFound desc = could not find container \"9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3\": container with ID starting with 9e03ced5a9b777c19d91eeb4d47b6623650d852a345cd66dae6aed4d231bc8a3 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.442351 4957 scope.go:117] "RemoveContainer" containerID="cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.444280 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.446999 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-knngc"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.450172 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.453831 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzzwl"] Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.454159 4957 scope.go:117] "RemoveContainer" containerID="4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.469939 4957 
scope.go:117] "RemoveContainer" containerID="955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.488535 4957 scope.go:117] "RemoveContainer" containerID="cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.488935 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08\": container with ID starting with cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08 not found: ID does not exist" containerID="cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.488964 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08"} err="failed to get container status \"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08\": rpc error: code = NotFound desc = could not find container \"cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08\": container with ID starting with cfd9704a0f9f86673c769d27f97c58c9438cd5a053995ba2295cd8bf4d421c08 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.488989 4957 scope.go:117] "RemoveContainer" containerID="4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.489364 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2\": container with ID starting with 4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2 not found: ID does not exist" containerID="4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.489386 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2"} err="failed to get container status \"4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2\": rpc error: code = NotFound desc = could not find container \"4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2\": container with ID starting with 4c0f4dbe2abb03f703b6b139f3749c11398d99f702ce894bedff8128ebf72ef2 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.489399 4957 scope.go:117] "RemoveContainer" containerID="955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.489913 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df\": container with ID starting with 955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df not found: ID does not exist" containerID="955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.490002 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df"} err="failed to get container status 
\"955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df\": rpc error: code = NotFound desc = could not find container \"955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df\": container with ID starting with 955e62e86801a8fba309a51e9a1156d30f16071fae24ad2ebb0ee69aeeee96df not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.490082 4957 scope.go:117] "RemoveContainer" containerID="79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.504818 4957 scope.go:117] "RemoveContainer" containerID="89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.520935 4957 scope.go:117] "RemoveContainer" containerID="4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.536631 4957 scope.go:117] "RemoveContainer" containerID="79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.537306 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc\": container with ID starting with 79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc not found: ID does not exist" containerID="79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.537344 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc"} err="failed to get container status \"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc\": rpc error: code = NotFound desc = could not find container \"79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc\": container with ID starting with 79f34a1ab1d3acadeb1acf1252aa4bcb1dd11aae29509ce0cda1efbedabf50bc not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.537379 4957 scope.go:117] "RemoveContainer" containerID="89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.538296 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2\": container with ID starting with 89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2 not found: ID does not exist" containerID="89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.538337 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2"} err="failed to get container status \"89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2\": rpc error: code = NotFound desc = could not find container \"89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2\": container with ID starting with 89acc2dbb7e9d984e9c512983653211a07ab40d28eb05fb54d2ed9ce6949afd2 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.538365 4957 scope.go:117] "RemoveContainer" containerID="4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35" Nov 28 20:54:50 crc 
kubenswrapper[4957]: E1128 20:54:50.538804 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35\": container with ID starting with 4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35 not found: ID does not exist" containerID="4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.538843 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35"} err="failed to get container status \"4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35\": rpc error: code = NotFound desc = could not find container \"4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35\": container with ID starting with 4a92a7d80a42e0099e2f2b522c059f749483bc57c6a49b65c4e564a652b98a35 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.538857 4957 scope.go:117] "RemoveContainer" containerID="ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.586439 4957 scope.go:117] "RemoveContainer" containerID="ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6" Nov 28 20:54:50 crc kubenswrapper[4957]: E1128 20:54:50.586987 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6\": container with ID starting with ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6 not found: ID does not exist" containerID="ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.587016 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6"} err="failed to get container status \"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6\": rpc error: code = NotFound desc = could not find container \"ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6\": container with ID starting with ddb18620edb436cba09bc7120616d4f8920c188e1c59711d1e1a5fb190a072a6 not found: ID does not exist" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.819468 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" path="/var/lib/kubelet/pods/2a6ea13b-5dba-46d9-a947-3a08d376c195/volumes" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.820022 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" path="/var/lib/kubelet/pods/2b119a86-1fc6-45aa-8b80-3abfc5a36c7c/volumes" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.820573 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" path="/var/lib/kubelet/pods/c093d27c-da80-4125-93fa-47a03d1082c5/volumes" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.821559 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" path="/var/lib/kubelet/pods/c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7/volumes" Nov 28 20:54:50 crc kubenswrapper[4957]: I1128 20:54:50.822127 4957 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" path="/var/lib/kubelet/pods/ecb3c993-1aab-4223-9efb-363b35b45e24/volumes" Nov 28 20:54:51 crc kubenswrapper[4957]: I1128 20:54:51.337288 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" event={"ID":"01c31d76-bda9-44e6-b62a-04a154eeae84","Type":"ContainerStarted","Data":"be63c6faaebc8f0092eb7c1f8150d4036a08e0534437184f2a0937b62a1b7593"} Nov 28 20:54:51 crc kubenswrapper[4957]: I1128 20:54:51.337634 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:51 crc kubenswrapper[4957]: I1128 20:54:51.340543 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" Nov 28 20:54:51 crc kubenswrapper[4957]: I1128 20:54:51.350239 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rv2ws" podStartSLOduration=2.350221255 podStartE2EDuration="2.350221255s" podCreationTimestamp="2025-11-28 20:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:51.349737113 +0000 UTC m=+330.818385042" watchObservedRunningTime="2025-11-28 20:54:51.350221255 +0000 UTC m=+330.818869154" Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.138250 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.138517 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" podUID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" containerName="controller-manager" containerID="cri-o://b277c91185b058ac777614ac48c16512d7455b3003e4952b86ac9712b98cf0fb" gracePeriod=30 Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.163905 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.164134 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" podUID="275c33c7-f8c3-46fa-9b47-eb572b561adb" containerName="route-controller-manager" containerID="cri-o://4ef122a557432d01b4775c9e010c6be12fe2fc044cf6dd2a2aeb7198c01c4b7c" gracePeriod=30 Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.353724 4957 generic.go:334] "Generic (PLEG): container finished" podID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" containerID="b277c91185b058ac777614ac48c16512d7455b3003e4952b86ac9712b98cf0fb" exitCode=0 Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.353845 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" event={"ID":"dd5c39d7-0ae0-418d-be5c-2bfb393ce368","Type":"ContainerDied","Data":"b277c91185b058ac777614ac48c16512d7455b3003e4952b86ac9712b98cf0fb"} Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.355728 4957 generic.go:334] "Generic (PLEG): container finished" podID="275c33c7-f8c3-46fa-9b47-eb572b561adb" containerID="4ef122a557432d01b4775c9e010c6be12fe2fc044cf6dd2a2aeb7198c01c4b7c" exitCode=0 Nov 28 20:54:53 crc 
kubenswrapper[4957]: I1128 20:54:53.355815 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" event={"ID":"275c33c7-f8c3-46fa-9b47-eb572b561adb","Type":"ContainerDied","Data":"4ef122a557432d01b4775c9e010c6be12fe2fc044cf6dd2a2aeb7198c01c4b7c"}
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.636814 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.642560 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804089 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert\") pod \"275c33c7-f8c3-46fa-9b47-eb572b561adb\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804151 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6mw5\" (UniqueName: \"kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5\") pod \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804180 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert\") pod \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804219 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config\") pod \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804253 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca\") pod \"275c33c7-f8c3-46fa-9b47-eb572b561adb\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804276 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles\") pod \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804318 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl95s\" (UniqueName: \"kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s\") pod \"275c33c7-f8c3-46fa-9b47-eb572b561adb\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config\") pod \"275c33c7-f8c3-46fa-9b47-eb572b561adb\" (UID: \"275c33c7-f8c3-46fa-9b47-eb572b561adb\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804387 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca\") pod \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\" (UID: \"dd5c39d7-0ae0-418d-be5c-2bfb393ce368\") "
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804913 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd5c39d7-0ae0-418d-be5c-2bfb393ce368" (UID: "dd5c39d7-0ae0-418d-be5c-2bfb393ce368"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.804942 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config" (OuterVolumeSpecName: "config") pod "dd5c39d7-0ae0-418d-be5c-2bfb393ce368" (UID: "dd5c39d7-0ae0-418d-be5c-2bfb393ce368"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.805365 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca" (OuterVolumeSpecName: "client-ca") pod "275c33c7-f8c3-46fa-9b47-eb572b561adb" (UID: "275c33c7-f8c3-46fa-9b47-eb572b561adb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.805464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd5c39d7-0ae0-418d-be5c-2bfb393ce368" (UID: "dd5c39d7-0ae0-418d-be5c-2bfb393ce368"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.805552 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config" (OuterVolumeSpecName: "config") pod "275c33c7-f8c3-46fa-9b47-eb572b561adb" (UID: "275c33c7-f8c3-46fa-9b47-eb572b561adb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.809778 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5" (OuterVolumeSpecName: "kube-api-access-x6mw5") pod "dd5c39d7-0ae0-418d-be5c-2bfb393ce368" (UID: "dd5c39d7-0ae0-418d-be5c-2bfb393ce368"). InnerVolumeSpecName "kube-api-access-x6mw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.810272 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s" (OuterVolumeSpecName: "kube-api-access-tl95s") pod "275c33c7-f8c3-46fa-9b47-eb572b561adb" (UID: "275c33c7-f8c3-46fa-9b47-eb572b561adb"). InnerVolumeSpecName "kube-api-access-tl95s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.810372 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "275c33c7-f8c3-46fa-9b47-eb572b561adb" (UID: "275c33c7-f8c3-46fa-9b47-eb572b561adb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.811766 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd5c39d7-0ae0-418d-be5c-2bfb393ce368" (UID: "dd5c39d7-0ae0-418d-be5c-2bfb393ce368"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906129 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl95s\" (UniqueName: \"kubernetes.io/projected/275c33c7-f8c3-46fa-9b47-eb572b561adb-kube-api-access-tl95s\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906502 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-config\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906514 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906530 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/275c33c7-f8c3-46fa-9b47-eb572b561adb-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906542 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6mw5\" (UniqueName: \"kubernetes.io/projected/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-kube-api-access-x6mw5\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906553 4957 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906564 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-config\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906572 4957 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/275c33c7-f8c3-46fa-9b47-eb572b561adb-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 20:54:53 crc kubenswrapper[4957]: I1128 20:54:53.906580 4957 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd5c39d7-0ae0-418d-be5c-2bfb393ce368-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
event={"ID":"dd5c39d7-0ae0-418d-be5c-2bfb393ce368","Type":"ContainerDied","Data":"9d2ba1d5c095349cc708cf9518093727976bd051b356d3250e1754406e2b5fa0"} Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.362552 4957 scope.go:117] "RemoveContainer" containerID="b277c91185b058ac777614ac48c16512d7455b3003e4952b86ac9712b98cf0fb" Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.362575 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk" Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.364474 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" event={"ID":"275c33c7-f8c3-46fa-9b47-eb572b561adb","Type":"ContainerDied","Data":"40944b0e5f5bfa8f1153f8a77267a736e0a3ac6e2cecbfdfed6eb183841a13d7"} Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.364551 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf" Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.383428 4957 scope.go:117] "RemoveContainer" containerID="4ef122a557432d01b4775c9e010c6be12fe2fc044cf6dd2a2aeb7198c01c4b7c" Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.405003 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.410325 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7766fdf5d9-wk6dk"] Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.414663 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.417151 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ccb45c985-wj9zf"] Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.818934 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275c33c7-f8c3-46fa-9b47-eb572b561adb" path="/var/lib/kubelet/pods/275c33c7-f8c3-46fa-9b47-eb572b561adb/volumes" Nov 28 20:54:54 crc kubenswrapper[4957]: I1128 20:54:54.819597 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" path="/var/lib/kubelet/pods/dd5c39d7-0ae0-418d-be5c-2bfb393ce368/volumes" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.112486 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b68476d44-mpg2k"] Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116665 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116693 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116738 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" containerName="marketplace-operator" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116746 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" 
containerName="marketplace-operator" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116756 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116764 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116780 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116787 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116797 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116804 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116819 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116826 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116844 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116851 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116862 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275c33c7-f8c3-46fa-9b47-eb572b561adb" containerName="route-controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116870 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="275c33c7-f8c3-46fa-9b47-eb572b561adb" containerName="route-controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116887 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116895 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116915 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116922 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="extract-utilities" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116932 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116942 4957 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116957 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116964 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116976 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.116983 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="extract-content" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.116998 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.117005 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: E1128 20:54:55.117021 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" containerName="controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.117029 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" containerName="controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.117232 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5c39d7-0ae0-418d-be5c-2bfb393ce368" containerName="controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.117252 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="275c33c7-f8c3-46fa-9b47-eb572b561adb" containerName="route-controller-manager" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.118024 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ed0d28-6db1-4f2a-87a6-07ffcc9d6ea7" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.118080 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c093d27c-da80-4125-93fa-47a03d1082c5" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.118108 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b119a86-1fc6-45aa-8b80-3abfc5a36c7c" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.118128 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ea13b-5dba-46d9-a947-3a08d376c195" containerName="registry-server" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.118175 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb3c993-1aab-4223-9efb-363b35b45e24" containerName="marketplace-operator" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.120083 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn"] Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.120534 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.121861 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.122907 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn"] Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.125271 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.126181 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.126478 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.126592 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.127020 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.127148 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.127022 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.127938 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.128311 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.129085 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.129535 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b68476d44-mpg2k"] Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.129590 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.134460 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.134837 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.220422 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-config\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " 
pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.220579 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5db89b42-3276-4911-aab9-be26d09fb959-serving-cert\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.220609 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-proxy-ca-bundles\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.220676 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-client-ca\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.220702 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2c48\" (UniqueName: \"kubernetes.io/projected/5db89b42-3276-4911-aab9-be26d09fb959-kube-api-access-l2c48\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-client-ca\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2c48\" (UniqueName: \"kubernetes.io/projected/5db89b42-3276-4911-aab9-be26d09fb959-kube-api-access-l2c48\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-config\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-client-ca\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc 
kubenswrapper[4957]: I1128 20:54:55.321841 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5db89b42-3276-4911-aab9-be26d09fb959-serving-cert\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-proxy-ca-bundles\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b97b186-3284-4381-8a33-523760ff550e-serving-cert\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321908 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-config\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.321942 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwj5s\" (UniqueName: \"kubernetes.io/projected/8b97b186-3284-4381-8a33-523760ff550e-kube-api-access-bwj5s\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.322784 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-client-ca\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.323067 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-proxy-ca-bundles\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.323510 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db89b42-3276-4911-aab9-be26d09fb959-config\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.327630 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5db89b42-3276-4911-aab9-be26d09fb959-serving-cert\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.338135 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2c48\" (UniqueName: \"kubernetes.io/projected/5db89b42-3276-4911-aab9-be26d09fb959-kube-api-access-l2c48\") pod \"controller-manager-b68476d44-mpg2k\" (UID: \"5db89b42-3276-4911-aab9-be26d09fb959\") " pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.423485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b97b186-3284-4381-8a33-523760ff550e-serving-cert\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.423570 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-config\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.423644 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwj5s\" (UniqueName: \"kubernetes.io/projected/8b97b186-3284-4381-8a33-523760ff550e-kube-api-access-bwj5s\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.423709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-client-ca\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.424899 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-config\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.425114 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b97b186-3284-4381-8a33-523760ff550e-client-ca\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.427531 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b97b186-3284-4381-8a33-523760ff550e-serving-cert\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: 
\"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.442825 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwj5s\" (UniqueName: \"kubernetes.io/projected/8b97b186-3284-4381-8a33-523760ff550e-kube-api-access-bwj5s\") pod \"route-controller-manager-69bd97b9cb-4shbn\" (UID: \"8b97b186-3284-4381-8a33-523760ff550e\") " pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.502222 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.510472 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.703095 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn"] Nov 28 20:54:55 crc kubenswrapper[4957]: W1128 20:54:55.709980 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b97b186_3284_4381_8a33_523760ff550e.slice/crio-6218942a10422cc851349ee39bda7e7a2dfa2d9faf37ba877dd6467aaec5faf7 WatchSource:0}: Error finding container 6218942a10422cc851349ee39bda7e7a2dfa2d9faf37ba877dd6467aaec5faf7: Status 404 returned error can't find the container with id 6218942a10422cc851349ee39bda7e7a2dfa2d9faf37ba877dd6467aaec5faf7 Nov 28 20:54:55 crc kubenswrapper[4957]: I1128 20:54:55.750527 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b68476d44-mpg2k"] Nov 28 20:54:55 crc kubenswrapper[4957]: W1128 20:54:55.754492 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db89b42_3276_4911_aab9_be26d09fb959.slice/crio-c700a0ab4f162ef8ca542e1d9c27563434bf9bc363c341f2a0a84e729d1c6dea WatchSource:0}: Error finding container c700a0ab4f162ef8ca542e1d9c27563434bf9bc363c341f2a0a84e729d1c6dea: Status 404 returned error can't find the container with id c700a0ab4f162ef8ca542e1d9c27563434bf9bc363c341f2a0a84e729d1c6dea Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.376660 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" event={"ID":"5db89b42-3276-4911-aab9-be26d09fb959","Type":"ContainerStarted","Data":"64411f6e66b13fee8498ab15a1743c2c45e38dca77b23f3bfb1d7fc4062a71d9"} Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.377023 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" event={"ID":"5db89b42-3276-4911-aab9-be26d09fb959","Type":"ContainerStarted","Data":"c700a0ab4f162ef8ca542e1d9c27563434bf9bc363c341f2a0a84e729d1c6dea"} Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.377064 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.378584 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" event={"ID":"8b97b186-3284-4381-8a33-523760ff550e","Type":"ContainerStarted","Data":"40fa52befdffde40dceb12b66f22f9963f4d52dee7e715529e5174dc76163366"} Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.378622 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" event={"ID":"8b97b186-3284-4381-8a33-523760ff550e","Type":"ContainerStarted","Data":"6218942a10422cc851349ee39bda7e7a2dfa2d9faf37ba877dd6467aaec5faf7"} Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.378969 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.383211 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.383668 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.398906 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b68476d44-mpg2k" podStartSLOduration=3.398888842 podStartE2EDuration="3.398888842s" podCreationTimestamp="2025-11-28 20:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:56.395753762 +0000 UTC m=+335.864401671" watchObservedRunningTime="2025-11-28 20:54:56.398888842 +0000 UTC m=+335.867536751" Nov 28 20:54:56 crc kubenswrapper[4957]: I1128 20:54:56.416761 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69bd97b9cb-4shbn" podStartSLOduration=3.416742048 podStartE2EDuration="3.416742048s" podCreationTimestamp="2025-11-28 20:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:54:56.412771336 +0000 UTC m=+335.881419245" watchObservedRunningTime="2025-11-28 20:54:56.416742048 +0000 UTC m=+335.885389957" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.099634 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l"] Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.101106 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.108468 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.108499 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.108856 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.108617 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.108929 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.123990 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l"] Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.132681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgbw7\" (UniqueName: \"kubernetes.io/projected/c9547aa5-5944-4e86-970f-80a1425b826f-kube-api-access-jgbw7\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.132752 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9547aa5-5944-4e86-970f-80a1425b826f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.132784 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c9547aa5-5944-4e86-970f-80a1425b826f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.233960 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c9547aa5-5944-4e86-970f-80a1425b826f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.234066 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgbw7\" (UniqueName: \"kubernetes.io/projected/c9547aa5-5944-4e86-970f-80a1425b826f-kube-api-access-jgbw7\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.234111 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9547aa5-5944-4e86-970f-80a1425b826f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.235072 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c9547aa5-5944-4e86-970f-80a1425b826f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.241348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9547aa5-5944-4e86-970f-80a1425b826f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.249160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgbw7\" (UniqueName: \"kubernetes.io/projected/c9547aa5-5944-4e86-970f-80a1425b826f-kube-api-access-jgbw7\") pod \"cluster-monitoring-operator-6d5b84845-8zn8l\" (UID: \"c9547aa5-5944-4e86-970f-80a1425b826f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.446173 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" Nov 28 20:55:19 crc kubenswrapper[4957]: I1128 20:55:19.830332 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l"] Nov 28 20:55:20 crc kubenswrapper[4957]: I1128 20:55:20.494106 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" event={"ID":"c9547aa5-5944-4e86-970f-80a1425b826f","Type":"ContainerStarted","Data":"a0ed87c1b1553f42b0045daa87832890b626288663656d93b4fa069679869ed0"} Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.078850 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-shng4"] Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.079742 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.101642 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-shng4"] Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.180506 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2"] Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.182022 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.183568 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.184736 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-v8j5r" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.185011 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2"] Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269697 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chtll\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-kube-api-access-chtll\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269763 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269805 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-certificates\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269829 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269850 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-bound-sa-token\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269888 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.269903 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-trusted-ca\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.270430 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-tls\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.304193 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371653 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-tls\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371706 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chtll\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-kube-api-access-chtll\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371773 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2834a538-4824-4ce9-bfa7-6f8fe364ae14-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-cm7j2\" (UID: \"2834a538-4824-4ce9-bfa7-6f8fe364ae14\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371816 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-certificates\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371875 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-bound-sa-token\") pod \"image-registry-66df7c8f76-shng4\" (UID: 
\"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.371969 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.372025 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-trusted-ca\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.372347 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.373522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-trusted-ca\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.373736 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-certificates\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.384646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.384732 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-registry-tls\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.386861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chtll\" (UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-kube-api-access-chtll\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.391910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/d05aeb53-bf3b-45e6-82ca-9e11beb03ec5-bound-sa-token\") pod \"image-registry-66df7c8f76-shng4\" (UID: \"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5\") " pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.473465 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2834a538-4824-4ce9-bfa7-6f8fe364ae14-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-cm7j2\" (UID: \"2834a538-4824-4ce9-bfa7-6f8fe364ae14\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.476769 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2834a538-4824-4ce9-bfa7-6f8fe364ae14-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-cm7j2\" (UID: \"2834a538-4824-4ce9-bfa7-6f8fe364ae14\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.494598 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.506819 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" event={"ID":"c9547aa5-5944-4e86-970f-80a1425b826f","Type":"ContainerStarted","Data":"b69e8da84c870297f1c04f6024ba6e06a61895cfb1ccb6ea2615af5eca103e25"} Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.523096 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8zn8l" podStartSLOduration=1.7003895629999999 podStartE2EDuration="3.523077147s" podCreationTimestamp="2025-11-28 20:55:19 +0000 UTC" firstStartedPulling="2025-11-28 20:55:19.839379464 +0000 UTC m=+359.308027363" lastFinishedPulling="2025-11-28 20:55:21.662067048 +0000 UTC m=+361.130714947" observedRunningTime="2025-11-28 20:55:22.521550728 +0000 UTC m=+361.990198637" watchObservedRunningTime="2025-11-28 20:55:22.523077147 +0000 UTC m=+361.991725056" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.690968 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:22 crc kubenswrapper[4957]: I1128 20:55:22.943017 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2"] Nov 28 20:55:22 crc kubenswrapper[4957]: W1128 20:55:22.946464 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2834a538_4824_4ce9_bfa7_6f8fe364ae14.slice/crio-234b05195db2c874124c01e9398c471eeb4ca8f824234c8bddf65fd0e58e74f0 WatchSource:0}: Error finding container 234b05195db2c874124c01e9398c471eeb4ca8f824234c8bddf65fd0e58e74f0: Status 404 returned error can't find the container with id 234b05195db2c874124c01e9398c471eeb4ca8f824234c8bddf65fd0e58e74f0 Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.053229 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-shng4"] Nov 28 20:55:23 crc kubenswrapper[4957]: W1128 20:55:23.060853 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd05aeb53_bf3b_45e6_82ca_9e11beb03ec5.slice/crio-250fd0e983f89cf1ac0491c9b50fb50c061e304367180c9b4d2a637234a78c42 WatchSource:0}: Error finding container 250fd0e983f89cf1ac0491c9b50fb50c061e304367180c9b4d2a637234a78c42: Status 404 returned error can't find the container with id 250fd0e983f89cf1ac0491c9b50fb50c061e304367180c9b4d2a637234a78c42 Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.513763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" event={"ID":"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5","Type":"ContainerStarted","Data":"d47b2f0173b7f1f4e195d02ed2da5072b28c51b271ec79ce3e238f722f8dae65"} Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.514113 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.514125 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" event={"ID":"d05aeb53-bf3b-45e6-82ca-9e11beb03ec5","Type":"ContainerStarted","Data":"250fd0e983f89cf1ac0491c9b50fb50c061e304367180c9b4d2a637234a78c42"} Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.515259 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" event={"ID":"2834a538-4824-4ce9-bfa7-6f8fe364ae14","Type":"ContainerStarted","Data":"234b05195db2c874124c01e9398c471eeb4ca8f824234c8bddf65fd0e58e74f0"} Nov 28 20:55:23 crc kubenswrapper[4957]: I1128 20:55:23.533121 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" podStartSLOduration=1.533097486 podStartE2EDuration="1.533097486s" podCreationTimestamp="2025-11-28 20:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:55:23.528740865 +0000 UTC m=+362.997388794" watchObservedRunningTime="2025-11-28 20:55:23.533097486 +0000 UTC m=+363.001745395" Nov 28 20:55:24 crc kubenswrapper[4957]: I1128 20:55:24.521255 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" 
event={"ID":"2834a538-4824-4ce9-bfa7-6f8fe364ae14","Type":"ContainerStarted","Data":"397bc5b8b480f761de015da67466078016554d731e55d19181a5d358940f5953"} Nov 28 20:55:24 crc kubenswrapper[4957]: I1128 20:55:24.521713 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:24 crc kubenswrapper[4957]: I1128 20:55:24.529839 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" Nov 28 20:55:24 crc kubenswrapper[4957]: I1128 20:55:24.538150 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-cm7j2" podStartSLOduration=1.280279408 podStartE2EDuration="2.538131357s" podCreationTimestamp="2025-11-28 20:55:22 +0000 UTC" firstStartedPulling="2025-11-28 20:55:22.948322112 +0000 UTC m=+362.416970021" lastFinishedPulling="2025-11-28 20:55:24.206174061 +0000 UTC m=+363.674821970" observedRunningTime="2025-11-28 20:55:24.536019493 +0000 UTC m=+364.004667402" watchObservedRunningTime="2025-11-28 20:55:24.538131357 +0000 UTC m=+364.006779266" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.248765 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-sq9jn"] Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.249564 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.259856 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-sq9jn"] Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.259985 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.308547 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.308550 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.310606 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6rkwq" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.410640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.410984 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.411017 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7q2\" (UniqueName: \"kubernetes.io/projected/f5024b46-ad67-4d79-a18b-208b9346ef04-kube-api-access-2z7q2\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.411036 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5024b46-ad67-4d79-a18b-208b9346ef04-metrics-client-ca\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.511827 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5024b46-ad67-4d79-a18b-208b9346ef04-metrics-client-ca\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.511928 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.511948 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.511972 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z7q2\" (UniqueName: \"kubernetes.io/projected/f5024b46-ad67-4d79-a18b-208b9346ef04-kube-api-access-2z7q2\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.512952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5024b46-ad67-4d79-a18b-208b9346ef04-metrics-client-ca\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.518944 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.518960 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5024b46-ad67-4d79-a18b-208b9346ef04-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.528585 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z7q2\" (UniqueName: \"kubernetes.io/projected/f5024b46-ad67-4d79-a18b-208b9346ef04-kube-api-access-2z7q2\") pod \"prometheus-operator-db54df47d-sq9jn\" (UID: \"f5024b46-ad67-4d79-a18b-208b9346ef04\") " pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.624933 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" Nov 28 20:55:25 crc kubenswrapper[4957]: I1128 20:55:25.996090 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-sq9jn"] Nov 28 20:55:26 crc kubenswrapper[4957]: I1128 20:55:26.534509 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" event={"ID":"f5024b46-ad67-4d79-a18b-208b9346ef04","Type":"ContainerStarted","Data":"6868a6e12d2f7885aba0acc799a3caf30273bd675ca58b80b84c7f67e50c07e7"} Nov 28 20:55:28 crc kubenswrapper[4957]: I1128 20:55:28.546480 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" event={"ID":"f5024b46-ad67-4d79-a18b-208b9346ef04","Type":"ContainerStarted","Data":"c3169bf37be171636651823bf3b80b7709c9f8d40cc0e5e2b465d8c6ea716c17"} Nov 28 20:55:28 crc kubenswrapper[4957]: I1128 20:55:28.546808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" event={"ID":"f5024b46-ad67-4d79-a18b-208b9346ef04","Type":"ContainerStarted","Data":"e2b15ed5189e04f0efad8b92f473d979605b22b6da502b4b81f6fae8e0e1cd9c"} Nov 28 20:55:28 crc kubenswrapper[4957]: I1128 20:55:28.574657 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-sq9jn" podStartSLOduration=2.130923642 podStartE2EDuration="3.574637962s" podCreationTimestamp="2025-11-28 20:55:25 +0000 UTC" firstStartedPulling="2025-11-28 20:55:26.006582798 +0000 UTC m=+365.475230717" lastFinishedPulling="2025-11-28 20:55:27.450297128 +0000 UTC m=+366.918945037" observedRunningTime="2025-11-28 20:55:28.569919951 +0000 UTC m=+368.038567880" watchObservedRunningTime="2025-11-28 20:55:28.574637962 +0000 UTC m=+368.043285881" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.598064 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r"] Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.599329 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.601602 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.601647 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s47fq" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.604093 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.617676 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw"] Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.618955 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.620660 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.620706 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vzpft" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.620942 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.621355 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r"] Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.622488 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.635325 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-7nn8f"] Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.636276 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.638382 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.640313 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.640910 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-zkknx" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.649453 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw"] Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.688461 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.688535 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qcdl\" (UniqueName: \"kubernetes.io/projected/4ba16d0a-eadb-46ce-b701-872fd02910a7-kube-api-access-4qcdl\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.688592 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ba16d0a-eadb-46ce-b701-872fd02910a7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.688710 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789312 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789355 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5v6\" (UniqueName: \"kubernetes.io/projected/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-api-access-zz5v6\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-textfile\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789404 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789422 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fxz\" (UniqueName: \"kubernetes.io/projected/61fcb641-32d9-4f2c-84b8-72c841d00e47-kube-api-access-h2fxz\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789456 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789474 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61fcb641-32d9-4f2c-84b8-72c841d00e47-metrics-client-ca\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789498 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/19687044-02bd-4507-a3ed-0d06d7bb0465-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-sys\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789533 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qcdl\" (UniqueName: \"kubernetes.io/projected/4ba16d0a-eadb-46ce-b701-872fd02910a7-kube-api-access-4qcdl\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-root\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789575 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-wtmp\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789597 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ba16d0a-eadb-46ce-b701-872fd02910a7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789637 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-tls\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789673 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.789692 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.791164 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ba16d0a-eadb-46ce-b701-872fd02910a7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.794865 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.807117 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ba16d0a-eadb-46ce-b701-872fd02910a7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.815512 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qcdl\" (UniqueName: \"kubernetes.io/projected/4ba16d0a-eadb-46ce-b701-872fd02910a7-kube-api-access-4qcdl\") pod \"openshift-state-metrics-566fddb674-6kf7r\" (UID: \"4ba16d0a-eadb-46ce-b701-872fd02910a7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.890858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-textfile\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.890907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.890941 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2fxz\" (UniqueName: \"kubernetes.io/projected/61fcb641-32d9-4f2c-84b8-72c841d00e47-kube-api-access-h2fxz\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.890971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.890992 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61fcb641-32d9-4f2c-84b8-72c841d00e47-metrics-client-ca\") pod 
\"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891025 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/19687044-02bd-4507-a3ed-0d06d7bb0465-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891049 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-sys\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891089 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-root\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891113 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-wtmp\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-tls\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891175 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891222 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891289 4957 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zz5v6\" (UniqueName: \"kubernetes.io/projected/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-api-access-zz5v6\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891313 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-root\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891388 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-wtmp\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891420 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fcb641-32d9-4f2c-84b8-72c841d00e47-sys\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891720 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61fcb641-32d9-4f2c-84b8-72c841d00e47-metrics-client-ca\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891893 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.891913 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-textfile\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.892179 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.892551 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/19687044-02bd-4507-a3ed-0d06d7bb0465-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.897109 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-tls\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.897643 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.903624 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/61fcb641-32d9-4f2c-84b8-72c841d00e47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.908020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5v6\" (UniqueName: \"kubernetes.io/projected/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-api-access-zz5v6\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.910770 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/19687044-02bd-4507-a3ed-0d06d7bb0465-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-kcmdw\" (UID: \"19687044-02bd-4507-a3ed-0d06d7bb0465\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.912419 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.918037 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2fxz\" (UniqueName: \"kubernetes.io/projected/61fcb641-32d9-4f2c-84b8-72c841d00e47-kube-api-access-h2fxz\") pod \"node-exporter-7nn8f\" (UID: \"61fcb641-32d9-4f2c-84b8-72c841d00e47\") " pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.931068 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" Nov 28 20:55:30 crc kubenswrapper[4957]: I1128 20:55:30.951222 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-7nn8f" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.155476 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r"] Nov 28 20:55:31 crc kubenswrapper[4957]: W1128 20:55:31.161849 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba16d0a_eadb_46ce_b701_872fd02910a7.slice/crio-5abe50dfc7b3fd043eafb72c2d238f01e41760fe04958dc4b92fc7297e629f27 WatchSource:0}: Error finding container 5abe50dfc7b3fd043eafb72c2d238f01e41760fe04958dc4b92fc7297e629f27: Status 404 returned error can't find the container with id 5abe50dfc7b3fd043eafb72c2d238f01e41760fe04958dc4b92fc7297e629f27 Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.453801 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw"] Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.562239 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" event={"ID":"19687044-02bd-4507-a3ed-0d06d7bb0465","Type":"ContainerStarted","Data":"9d73cb1c2ceff4c46dc1146f123df98b3748d506a9bd5d7b74b6eb613b42bad4"} Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.563859 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" event={"ID":"4ba16d0a-eadb-46ce-b701-872fd02910a7","Type":"ContainerStarted","Data":"7a6b47a0d4b308bc6d4a1f82b3681f1c45befb99562e860edbdcbd925298762d"} Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.563898 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" event={"ID":"4ba16d0a-eadb-46ce-b701-872fd02910a7","Type":"ContainerStarted","Data":"6434e179a0f6d6b919bc9c776f5657e5485cd73ed6f35ac88d7b3096bcaa0e31"} Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.563909 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" event={"ID":"4ba16d0a-eadb-46ce-b701-872fd02910a7","Type":"ContainerStarted","Data":"5abe50dfc7b3fd043eafb72c2d238f01e41760fe04958dc4b92fc7297e629f27"} Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.565120 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7nn8f" event={"ID":"61fcb641-32d9-4f2c-84b8-72c841d00e47","Type":"ContainerStarted","Data":"d690a131d8b0f60cb1a3f67a95277f7544c530e3b0604d8fe65fd23777a5b8ca"} Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.752879 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.754501 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.756504 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.756725 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.757315 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.757494 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.757658 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2svzm" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.758451 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.759244 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.760705 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.770189 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.775668 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905361 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905416 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905484 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zf7\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-kube-api-access-w7zf7\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905547 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905584 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905606 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905628 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905696 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-web-config\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-out\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905747 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905789 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:31 crc kubenswrapper[4957]: I1128 20:55:31.905812 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.006927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-web-config\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.006990 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-out\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007021 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007067 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007116 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007138 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007161 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zf7\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-kube-api-access-w7zf7\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007235 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007260 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.007735 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.009904 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.011266 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3779c48a-69e4-4a84-b9e3-16bd6c13410f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.014184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.014474 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-web-config\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.015406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.016283 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.016282 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-volume\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.016387 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.016448 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3779c48a-69e4-4a84-b9e3-16bd6c13410f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.022721 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3779c48a-69e4-4a84-b9e3-16bd6c13410f-config-out\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.026101 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zf7\" (UniqueName: \"kubernetes.io/projected/3779c48a-69e4-4a84-b9e3-16bd6c13410f-kube-api-access-w7zf7\") pod \"alertmanager-main-0\" (UID: \"3779c48a-69e4-4a84-b9e3-16bd6c13410f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.073538 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.573745 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7nn8f" event={"ID":"61fcb641-32d9-4f2c-84b8-72c841d00e47","Type":"ContainerStarted","Data":"f3495ebf6a062c9d10c9d6077e8aa5a864762ac1524ee9e5a33033b2709767ae"} Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.746999 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.790780 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6c5554b445-6pkf9"] Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.792643 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.795872 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.796100 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.796248 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.796400 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-5v3tpvm7cbcn4" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.796766 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.796949 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-dgdxf" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.797090 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.848837 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c5554b445-6pkf9"] Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922778 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922846 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5acf65-de7a-40dd-9412-169a8a55f6f6-metrics-client-ca\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922869 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqcfz\" (UniqueName: \"kubernetes.io/projected/1a5acf65-de7a-40dd-9412-169a8a55f6f6-kube-api-access-gqcfz\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922937 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-grpc-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922953 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.922978 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:32 crc kubenswrapper[4957]: I1128 20:55:32.923013 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024467 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024510 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5acf65-de7a-40dd-9412-169a8a55f6f6-metrics-client-ca\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024535 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqcfz\" (UniqueName: \"kubernetes.io/projected/1a5acf65-de7a-40dd-9412-169a8a55f6f6-kube-api-access-gqcfz\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024629 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-grpc-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024652 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.024689 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.026371 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a5acf65-de7a-40dd-9412-169a8a55f6f6-metrics-client-ca\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.030814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.031071 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-grpc-tls\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.032454 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.037713 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.039799 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.040042 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a5acf65-de7a-40dd-9412-169a8a55f6f6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.041565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqcfz\" (UniqueName: \"kubernetes.io/projected/1a5acf65-de7a-40dd-9412-169a8a55f6f6-kube-api-access-gqcfz\") pod \"thanos-querier-6c5554b445-6pkf9\" (UID: \"1a5acf65-de7a-40dd-9412-169a8a55f6f6\") " pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.149796 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:33 crc kubenswrapper[4957]: W1128 20:55:33.153432 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3779c48a_69e4_4a84_b9e3_16bd6c13410f.slice/crio-e8b5bf3005fba2fe87807994f292df3d8489eb3df092264ae9b838c1fd85d3e1 WatchSource:0}: Error finding container e8b5bf3005fba2fe87807994f292df3d8489eb3df092264ae9b838c1fd85d3e1: Status 404 returned error can't find the container with id e8b5bf3005fba2fe87807994f292df3d8489eb3df092264ae9b838c1fd85d3e1 Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.589778 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"e8b5bf3005fba2fe87807994f292df3d8489eb3df092264ae9b838c1fd85d3e1"} Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.592440 4957 generic.go:334] "Generic (PLEG): container finished" podID="61fcb641-32d9-4f2c-84b8-72c841d00e47" containerID="f3495ebf6a062c9d10c9d6077e8aa5a864762ac1524ee9e5a33033b2709767ae" exitCode=0 Nov 28 20:55:33 crc kubenswrapper[4957]: I1128 20:55:33.592475 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7nn8f" event={"ID":"61fcb641-32d9-4f2c-84b8-72c841d00e47","Type":"ContainerDied","Data":"f3495ebf6a062c9d10c9d6077e8aa5a864762ac1524ee9e5a33033b2709767ae"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.068471 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c5554b445-6pkf9"] Nov 28 20:55:34 crc kubenswrapper[4957]: W1128 20:55:34.083417 4957 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a5acf65_de7a_40dd_9412_169a8a55f6f6.slice/crio-3716d6ab97cbe77893a3caf93591bf374e9291951afa980bce8dc4907f6ad00b WatchSource:0}: Error finding container 3716d6ab97cbe77893a3caf93591bf374e9291951afa980bce8dc4907f6ad00b: Status 404 returned error can't find the container with id 3716d6ab97cbe77893a3caf93591bf374e9291951afa980bce8dc4907f6ad00b Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.600154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" event={"ID":"19687044-02bd-4507-a3ed-0d06d7bb0465","Type":"ContainerStarted","Data":"fb59025fbd1b63837e1d8ac277af559c85b1ddac9a33781b96cb6deae3e9309c"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.600202 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" event={"ID":"19687044-02bd-4507-a3ed-0d06d7bb0465","Type":"ContainerStarted","Data":"fadd03396f000328b5df94969a90a6887a35a5dccc22acf964d4821ea6001f17"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.600223 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" event={"ID":"19687044-02bd-4507-a3ed-0d06d7bb0465","Type":"ContainerStarted","Data":"c0bd59d740b03ea740b284d9ca518520c551460f6c14b027b62ad145c2731ecc"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.604533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"3716d6ab97cbe77893a3caf93591bf374e9291951afa980bce8dc4907f6ad00b"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.607011 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" event={"ID":"4ba16d0a-eadb-46ce-b701-872fd02910a7","Type":"ContainerStarted","Data":"2f7c17c602c49d8d196401b5431fcf9a463a53f9c809677c531cb88922e01fdd"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.617683 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-kcmdw" podStartSLOduration=2.444237109 podStartE2EDuration="4.617664389s" podCreationTimestamp="2025-11-28 20:55:30 +0000 UTC" firstStartedPulling="2025-11-28 20:55:31.459488825 +0000 UTC m=+370.928136764" lastFinishedPulling="2025-11-28 20:55:33.632916135 +0000 UTC m=+373.101564044" observedRunningTime="2025-11-28 20:55:34.617109755 +0000 UTC m=+374.085757664" watchObservedRunningTime="2025-11-28 20:55:34.617664389 +0000 UTC m=+374.086312298" Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.624755 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7nn8f" event={"ID":"61fcb641-32d9-4f2c-84b8-72c841d00e47","Type":"ContainerStarted","Data":"593b8e900699b80649cc06cbce33b7f8b9abf6220e25680fa57833b30608cc96"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.624796 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7nn8f" event={"ID":"61fcb641-32d9-4f2c-84b8-72c841d00e47","Type":"ContainerStarted","Data":"87c0047fc9f4d164601d878fd8882c55ed48bc02ff43eae60b3169d991c81cec"} Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.638021 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-6kf7r" 
podStartSLOduration=2.4999453799999998 podStartE2EDuration="4.638000528s" podCreationTimestamp="2025-11-28 20:55:30 +0000 UTC" firstStartedPulling="2025-11-28 20:55:31.488906195 +0000 UTC m=+370.957554104" lastFinishedPulling="2025-11-28 20:55:33.626961343 +0000 UTC m=+373.095609252" observedRunningTime="2025-11-28 20:55:34.634603931 +0000 UTC m=+374.103251830" watchObservedRunningTime="2025-11-28 20:55:34.638000528 +0000 UTC m=+374.106648447" Nov 28 20:55:34 crc kubenswrapper[4957]: I1128 20:55:34.655009 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-7nn8f" podStartSLOduration=3.376804134 podStartE2EDuration="4.654983891s" podCreationTimestamp="2025-11-28 20:55:30 +0000 UTC" firstStartedPulling="2025-11-28 20:55:30.995292437 +0000 UTC m=+370.463940346" lastFinishedPulling="2025-11-28 20:55:32.273472194 +0000 UTC m=+371.742120103" observedRunningTime="2025-11-28 20:55:34.65334865 +0000 UTC m=+374.121996559" watchObservedRunningTime="2025-11-28 20:55:34.654983891 +0000 UTC m=+374.123631800" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.397800 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.398925 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.415526 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbskt\" (UniqueName: \"kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563139 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563168 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563204 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle\") pod 
\"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563272 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.563301 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.632326 4957 generic.go:334] "Generic (PLEG): container finished" podID="3779c48a-69e4-4a84-b9e3-16bd6c13410f" containerID="43e80d285c9cdf2eca77488b823399c651d24587586e56dd6c790998fd4dfed1" exitCode=0 Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.632445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerDied","Data":"43e80d285c9cdf2eca77488b823399c651d24587586e56dd6c790998fd4dfed1"} Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.664163 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.664502 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbskt\" (UniqueName: \"kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.664519 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.664554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.664630 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 
20:55:35.665445 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.665495 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.665510 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.665636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.666187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.666907 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.669998 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.670973 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.685348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbskt\" (UniqueName: \"kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt\") pod \"console-57f6c6596d-dx9hm\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.715057 4957 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.965611 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5556948c44-lkfw8"] Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.966301 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.968692 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.968777 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-8mp54cck23imn" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.968883 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.968943 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.969005 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-knvcm" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.969049 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 28 20:55:35 crc kubenswrapper[4957]: I1128 20:55:35.976674 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5556948c44-lkfw8"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.005918 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071577 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-client-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071626 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071653 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-client-certs\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071828 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-server-tls\") pod 
\"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071928 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-metrics-server-audit-profiles\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.071959 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6bl\" (UniqueName: \"kubernetes.io/projected/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-kube-api-access-gm6bl\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.072039 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-audit-log\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.172925 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-server-tls\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.172978 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6bl\" (UniqueName: \"kubernetes.io/projected/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-kube-api-access-gm6bl\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.172995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-metrics-server-audit-profiles\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.173023 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-audit-log\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.173099 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-client-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 
crc kubenswrapper[4957]: I1128 20:55:36.173122 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.173143 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-client-certs\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.173672 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-audit-log\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.174100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.174452 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-metrics-server-audit-profiles\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.177846 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-client-certs\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.177867 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-client-ca-bundle\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.177871 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-secret-metrics-server-tls\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.190773 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6bl\" (UniqueName: 
\"kubernetes.io/projected/7a03fc5f-fbec-4afd-9917-2141ef75c0e6-kube-api-access-gm6bl\") pod \"metrics-server-5556948c44-lkfw8\" (UID: \"7a03fc5f-fbec-4afd-9917-2141ef75c0e6\") " pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.296303 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.318287 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrgtn"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.322265 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.324735 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.325730 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrgtn"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.393797 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.395024 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.397234 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.397417 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.406591 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.476539 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-catalog-content\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.476592 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-utilities\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.476620 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7q72\" (UniqueName: \"kubernetes.io/projected/7d71eea9-30f9-4091-acf2-c7e6e5890b30-kube-api-access-l7q72\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.476926 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/94ad3317-8735-426c-9979-1bf5898c0851-monitoring-plugin-cert\") pod \"monitoring-plugin-75555c8b6f-dvr9m\" (UID: \"94ad3317-8735-426c-9979-1bf5898c0851\") " pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.509076 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hp8gk"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.511690 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.515113 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.516125 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hp8gk"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578106 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7q72\" (UniqueName: \"kubernetes.io/projected/7d71eea9-30f9-4091-acf2-c7e6e5890b30-kube-api-access-l7q72\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578172 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/f9d7934f-40b4-4156-b9c4-645229f18296-kube-api-access-6glvx\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-utilities\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/94ad3317-8735-426c-9979-1bf5898c0851-monitoring-plugin-cert\") pod \"monitoring-plugin-75555c8b6f-dvr9m\" (UID: \"94ad3317-8735-426c-9979-1bf5898c0851\") " pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578308 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-catalog-content\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-catalog-content\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578377 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-utilities\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.578774 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-utilities\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.579002 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d71eea9-30f9-4091-acf2-c7e6e5890b30-catalog-content\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.583729 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/94ad3317-8735-426c-9979-1bf5898c0851-monitoring-plugin-cert\") pod \"monitoring-plugin-75555c8b6f-dvr9m\" (UID: \"94ad3317-8735-426c-9979-1bf5898c0851\") " pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.595978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7q72\" (UniqueName: \"kubernetes.io/projected/7d71eea9-30f9-4091-acf2-c7e6e5890b30-kube-api-access-l7q72\") pod \"redhat-marketplace-lrgtn\" (UID: \"7d71eea9-30f9-4091-acf2-c7e6e5890b30\") " pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.640919 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f6c6596d-dx9hm" event={"ID":"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb","Type":"ContainerStarted","Data":"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1"} Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.640958 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f6c6596d-dx9hm" event={"ID":"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb","Type":"ContainerStarted","Data":"c807fe660820096e4318713b9377b41493ebda65f72acf19641da5d4951edd84"} Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.649950 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.673167 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57f6c6596d-dx9hm" podStartSLOduration=1.6731472109999999 podStartE2EDuration="1.673147211s" podCreationTimestamp="2025-11-28 20:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:55:36.667347853 +0000 UTC m=+376.135995782" watchObservedRunningTime="2025-11-28 20:55:36.673147211 +0000 UTC m=+376.141795120" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.679656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-utilities\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.679759 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-catalog-content\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.679824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/f9d7934f-40b4-4156-b9c4-645229f18296-kube-api-access-6glvx\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.680384 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-utilities\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.680453 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d7934f-40b4-4156-b9c4-645229f18296-catalog-content\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.701155 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/f9d7934f-40b4-4156-b9c4-645229f18296-kube-api-access-6glvx\") pod \"redhat-operators-hp8gk\" (UID: \"f9d7934f-40b4-4156-b9c4-645229f18296\") " pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.712346 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5556948c44-lkfw8"] Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.722474 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:36 crc kubenswrapper[4957]: I1128 20:55:36.832927 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.092459 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.094891 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098010 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-kzzml" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098265 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098381 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098499 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098658 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-5s8pq8ej9905g" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098782 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.098906 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.100340 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.100344 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.102840 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.104368 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.104790 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.115028 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.117843 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186412 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186470 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186603 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186634 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186657 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186748 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-web-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186785 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 
20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186844 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186867 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-config-out\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186937 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.186999 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rf5l\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-kube-api-access-2rf5l\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.187026 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288637 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288664 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288681 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288696 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288770 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288799 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288816 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-web-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288836 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288852 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288875 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-config-out\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288941 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.288959 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rf5l\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-kube-api-access-2rf5l\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.289997 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.290494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.291475 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.292871 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.293202 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.293440 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.294257 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc172366-d107-4abc-b635-7bdbd5dd97dc-config-out\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.294858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.296021 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.296645 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.296909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.301068 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc172366-d107-4abc-b635-7bdbd5dd97dc-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.305119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.305157 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-web-config\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.305288 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.305517 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.305988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fc172366-d107-4abc-b635-7bdbd5dd97dc-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.307936 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rf5l\" (UniqueName: \"kubernetes.io/projected/fc172366-d107-4abc-b635-7bdbd5dd97dc-kube-api-access-2rf5l\") pod \"prometheus-k8s-0\" (UID: \"fc172366-d107-4abc-b635-7bdbd5dd97dc\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:37 crc kubenswrapper[4957]: I1128 20:55:37.412613 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.176663 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m"] Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.184329 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrgtn"] Nov 28 20:55:38 crc kubenswrapper[4957]: W1128 20:55:38.188916 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ad3317_8735_426c_9979_1bf5898c0851.slice/crio-cb28b6f03ddd348bc211c1d2e70bfc3d19e373757bb158188534c5d190618652 WatchSource:0}: Error finding container cb28b6f03ddd348bc211c1d2e70bfc3d19e373757bb158188534c5d190618652: Status 404 returned error can't find the container with id cb28b6f03ddd348bc211c1d2e70bfc3d19e373757bb158188534c5d190618652 Nov 28 20:55:38 crc kubenswrapper[4957]: W1128 20:55:38.190192 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d71eea9_30f9_4091_acf2_c7e6e5890b30.slice/crio-fc3803467a8d7baf1be1129416ac8936327a70a35ac2253a84ef5f5e66772f74 WatchSource:0}: Error finding container fc3803467a8d7baf1be1129416ac8936327a70a35ac2253a84ef5f5e66772f74: Status 404 returned error can't find the container with id fc3803467a8d7baf1be1129416ac8936327a70a35ac2253a84ef5f5e66772f74 Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.218357 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 20:55:38 crc kubenswrapper[4957]: W1128 20:55:38.224120 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc172366_d107_4abc_b635_7bdbd5dd97dc.slice/crio-af83436ea99b7c5663c01ecc3723a3a591d8081cb2bbd334252fe68da3c4800c WatchSource:0}: Error finding container af83436ea99b7c5663c01ecc3723a3a591d8081cb2bbd334252fe68da3c4800c: Status 404 returned error can't find the container with id af83436ea99b7c5663c01ecc3723a3a591d8081cb2bbd334252fe68da3c4800c Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.317300 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hp8gk"] Nov 28 20:55:38 crc kubenswrapper[4957]: W1128 20:55:38.324460 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d7934f_40b4_4156_b9c4_645229f18296.slice/crio-d25e0770cd84dfd523af52ccaad6a0579805784bf6efdbee5ee0800809fa808f WatchSource:0}: Error finding container d25e0770cd84dfd523af52ccaad6a0579805784bf6efdbee5ee0800809fa808f: Status 404 returned error can't find the container with id d25e0770cd84dfd523af52ccaad6a0579805784bf6efdbee5ee0800809fa808f Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.654805 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9d7934f-40b4-4156-b9c4-645229f18296" containerID="81794c2233a0d2025b75b18593f44695796f3b3c93a784d72acffdfd0bdf4526" exitCode=0 Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.654864 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hp8gk" event={"ID":"f9d7934f-40b4-4156-b9c4-645229f18296","Type":"ContainerDied","Data":"81794c2233a0d2025b75b18593f44695796f3b3c93a784d72acffdfd0bdf4526"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.655185 4957 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hp8gk" event={"ID":"f9d7934f-40b4-4156-b9c4-645229f18296","Type":"ContainerStarted","Data":"d25e0770cd84dfd523af52ccaad6a0579805784bf6efdbee5ee0800809fa808f"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.657129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" event={"ID":"94ad3317-8735-426c-9979-1bf5898c0851","Type":"ContainerStarted","Data":"cb28b6f03ddd348bc211c1d2e70bfc3d19e373757bb158188534c5d190618652"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.660676 4957 generic.go:334] "Generic (PLEG): container finished" podID="fc172366-d107-4abc-b635-7bdbd5dd97dc" containerID="d91cb9e15a22e735aaa3a71dbad7ad6855483c75e2db7017e6a7d1d6d45dec76" exitCode=0 Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.660767 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerDied","Data":"d91cb9e15a22e735aaa3a71dbad7ad6855483c75e2db7017e6a7d1d6d45dec76"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.660810 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"af83436ea99b7c5663c01ecc3723a3a591d8081cb2bbd334252fe68da3c4800c"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.665410 4957 generic.go:334] "Generic (PLEG): container finished" podID="7d71eea9-30f9-4091-acf2-c7e6e5890b30" containerID="39fa2ff0c03f5d96056112d120128700582003ae6a6e977a1b36ab9df2505b14" exitCode=0 Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.665464 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrgtn" event={"ID":"7d71eea9-30f9-4091-acf2-c7e6e5890b30","Type":"ContainerDied","Data":"39fa2ff0c03f5d96056112d120128700582003ae6a6e977a1b36ab9df2505b14"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.665483 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrgtn" event={"ID":"7d71eea9-30f9-4091-acf2-c7e6e5890b30","Type":"ContainerStarted","Data":"fc3803467a8d7baf1be1129416ac8936327a70a35ac2253a84ef5f5e66772f74"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.667332 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" event={"ID":"7a03fc5f-fbec-4afd-9917-2141ef75c0e6","Type":"ContainerStarted","Data":"2bc2adb52e9822153276660aaba8fc1df3c5f23a2839f1ad4620e05aa2454423"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.673982 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"7f4fd812e17062c29eb407092c095064597a50b82d308f7f75c1a82519d7b82b"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.674024 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"0404843f77619c82bfd436ad68dc1db05a9553be38dac3bf9dbf1a77b4b0b525"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.674037 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" 
event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"bdc4786eb442a609c7fab89be42802fbcca4eb25eec6d1e5f3ab93351fc4bc8d"} Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.710458 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8kmxc"] Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.711692 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.713129 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.734051 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kmxc"] Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.833760 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-catalog-content\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.833816 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-utilities\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.833943 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz44n\" (UniqueName: \"kubernetes.io/projected/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-kube-api-access-qz44n\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.912084 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b47dz"] Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.913307 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.915183 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.920506 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b47dz"] Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.935648 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-utilities\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.935733 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz44n\" (UniqueName: \"kubernetes.io/projected/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-kube-api-access-qz44n\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.935799 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-catalog-content\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.936495 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-utilities\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.936555 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-catalog-content\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.953864 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz44n\" (UniqueName: \"kubernetes.io/projected/564f67f5-ceaa-4b51-bb95-289d69ab2bdf-kube-api-access-qz44n\") pod \"community-operators-8kmxc\" (UID: \"564f67f5-ceaa-4b51-bb95-289d69ab2bdf\") " pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.992952 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:55:38 crc kubenswrapper[4957]: I1128 20:55:38.993030 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 
28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.037588 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zgb\" (UniqueName: \"kubernetes.io/projected/c14378db-11fd-4aa8-ad95-c9531993160a-kube-api-access-t2zgb\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.037659 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-catalog-content\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.037837 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-utilities\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.037856 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.139984 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-utilities\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.140103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zgb\" (UniqueName: \"kubernetes.io/projected/c14378db-11fd-4aa8-ad95-c9531993160a-kube-api-access-t2zgb\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.140149 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-catalog-content\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.140595 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-utilities\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.141059 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c14378db-11fd-4aa8-ad95-c9531993160a-catalog-content\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.155954 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t2zgb\" (UniqueName: \"kubernetes.io/projected/c14378db-11fd-4aa8-ad95-c9531993160a-kube-api-access-t2zgb\") pod \"certified-operators-b47dz\" (UID: \"c14378db-11fd-4aa8-ad95-c9531993160a\") " pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:39 crc kubenswrapper[4957]: I1128 20:55:39.242549 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.365535 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kmxc"] Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.423662 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b47dz"] Nov 28 20:55:40 crc kubenswrapper[4957]: W1128 20:55:40.434925 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14378db_11fd_4aa8_ad95_c9531993160a.slice/crio-5df046fe63d0d89fcd3327465fbfd7e71e9296d3238c895ddc7d6db0fc315942 WatchSource:0}: Error finding container 5df046fe63d0d89fcd3327465fbfd7e71e9296d3238c895ddc7d6db0fc315942: Status 404 returned error can't find the container with id 5df046fe63d0d89fcd3327465fbfd7e71e9296d3238c895ddc7d6db0fc315942 Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.690614 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" event={"ID":"7a03fc5f-fbec-4afd-9917-2141ef75c0e6","Type":"ContainerStarted","Data":"944f1f4692d63d9bb69de07ed1ed5fa84c3ad2ecc45fc837cd14de2d4870f3ec"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.700503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"c4f27b11cf91e93086b91957188d12dd31904fc9700e74bef77d905365140078"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.700563 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"f3810ce057d4986f1233dc36bea8ded888eaae19e30788c140957887e4838501"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.704661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"f55e460a03b0b0815452f3b307f03618d544479cabe7fe79adba840cb512597a"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.704708 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"e75d67d9461a1310926fae843886a733d5a18f63c078b6027f873373542c84b4"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.718058 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" podStartSLOduration=3.365513872 podStartE2EDuration="5.718042579s" podCreationTimestamp="2025-11-28 20:55:35 +0000 UTC" firstStartedPulling="2025-11-28 20:55:37.652123328 +0000 UTC m=+377.120771227" lastFinishedPulling="2025-11-28 20:55:40.004652025 +0000 UTC m=+379.473299934" observedRunningTime="2025-11-28 20:55:40.710808955 +0000 UTC m=+380.179456864" watchObservedRunningTime="2025-11-28 
20:55:40.718042579 +0000 UTC m=+380.186690488" Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.720819 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b47dz" event={"ID":"c14378db-11fd-4aa8-ad95-c9531993160a","Type":"ContainerStarted","Data":"709db8e6dd5b6f80909374def7c80356d11eefa61fc4dceefbaf77cfe5f57cad"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.720886 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b47dz" event={"ID":"c14378db-11fd-4aa8-ad95-c9531993160a","Type":"ContainerStarted","Data":"5df046fe63d0d89fcd3327465fbfd7e71e9296d3238c895ddc7d6db0fc315942"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.729934 4957 generic.go:334] "Generic (PLEG): container finished" podID="564f67f5-ceaa-4b51-bb95-289d69ab2bdf" containerID="c6e44f77b1fa9ac8068dc0c8fa0351f422deac979e999866d6dcaf6eb1e9ba43" exitCode=0 Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.730184 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kmxc" event={"ID":"564f67f5-ceaa-4b51-bb95-289d69ab2bdf","Type":"ContainerDied","Data":"c6e44f77b1fa9ac8068dc0c8fa0351f422deac979e999866d6dcaf6eb1e9ba43"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.730227 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kmxc" event={"ID":"564f67f5-ceaa-4b51-bb95-289d69ab2bdf","Type":"ContainerStarted","Data":"94a644a912dd5ba0445e1d8cbbfc420f97bae6d99871643cdabe65cbc0cab1b3"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.732860 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hp8gk" event={"ID":"f9d7934f-40b4-4156-b9c4-645229f18296","Type":"ContainerStarted","Data":"ea47758d8d7d24f09a7a5f1672b9f06eef51d3fce32b39097f7bc02c25647da7"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.746680 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" event={"ID":"94ad3317-8735-426c-9979-1bf5898c0851","Type":"ContainerStarted","Data":"c374398249cc70a11df014ae6bb60a86abd85ee9f627662c5c7e9575f971b4db"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.747956 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.763614 4957 generic.go:334] "Generic (PLEG): container finished" podID="7d71eea9-30f9-4091-acf2-c7e6e5890b30" containerID="321b87d4323a21484fb1366529914f02ec2f31ec7e6be81d75b4ccc899a7ef2c" exitCode=0 Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.763670 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrgtn" event={"ID":"7d71eea9-30f9-4091-acf2-c7e6e5890b30","Type":"ContainerDied","Data":"321b87d4323a21484fb1366529914f02ec2f31ec7e6be81d75b4ccc899a7ef2c"} Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.763741 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" Nov 28 20:55:40 crc kubenswrapper[4957]: I1128 20:55:40.804423 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-75555c8b6f-dvr9m" podStartSLOduration=2.923514773 podStartE2EDuration="4.804402392s" podCreationTimestamp="2025-11-28 20:55:36 +0000 UTC" 
firstStartedPulling="2025-11-28 20:55:38.191429412 +0000 UTC m=+377.660077321" lastFinishedPulling="2025-11-28 20:55:40.072317031 +0000 UTC m=+379.540964940" observedRunningTime="2025-11-28 20:55:40.801316583 +0000 UTC m=+380.269964502" watchObservedRunningTime="2025-11-28 20:55:40.804402392 +0000 UTC m=+380.273050301" Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.782957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"1e4fab64c04fd44123954db2697b5777994676448ef8e2f2945402ad476a8f4c"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.783238 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"8423720349a84a7aa535a601f60410655e12a86eff7b8fbf65192431d417f8c1"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.783249 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"f71ec59bc72f284cb02ed6744e5caec3196557a224fbdc904c9906a5fc07c320"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.783257 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3779c48a-69e4-4a84-b9e3-16bd6c13410f","Type":"ContainerStarted","Data":"63d94d22635c7d89146a8ea9c2a610f9a58910419082a9a5f035f59d7a7bd2fd"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.784431 4957 generic.go:334] "Generic (PLEG): container finished" podID="c14378db-11fd-4aa8-ad95-c9531993160a" containerID="709db8e6dd5b6f80909374def7c80356d11eefa61fc4dceefbaf77cfe5f57cad" exitCode=0 Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.784488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b47dz" event={"ID":"c14378db-11fd-4aa8-ad95-c9531993160a","Type":"ContainerDied","Data":"709db8e6dd5b6f80909374def7c80356d11eefa61fc4dceefbaf77cfe5f57cad"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.787237 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9d7934f-40b4-4156-b9c4-645229f18296" containerID="ea47758d8d7d24f09a7a5f1672b9f06eef51d3fce32b39097f7bc02c25647da7" exitCode=0 Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.787259 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hp8gk" event={"ID":"f9d7934f-40b4-4156-b9c4-645229f18296","Type":"ContainerDied","Data":"ea47758d8d7d24f09a7a5f1672b9f06eef51d3fce32b39097f7bc02c25647da7"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.789437 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrgtn" event={"ID":"7d71eea9-30f9-4091-acf2-c7e6e5890b30","Type":"ContainerStarted","Data":"fa4f1f5b4ed1bf152e32021fd29dcfc08643a667046abfd75fccb2506fa2e838"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.833660 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.010084967 podStartE2EDuration="10.833644671s" podCreationTimestamp="2025-11-28 20:55:31 +0000 UTC" firstStartedPulling="2025-11-28 20:55:33.155004326 +0000 UTC m=+372.623652235" lastFinishedPulling="2025-11-28 20:55:39.97856403 +0000 UTC m=+379.447211939" 
observedRunningTime="2025-11-28 20:55:41.816320299 +0000 UTC m=+381.284968208" watchObservedRunningTime="2025-11-28 20:55:41.833644671 +0000 UTC m=+381.302292580" Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.850013 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" event={"ID":"1a5acf65-de7a-40dd-9412-169a8a55f6f6","Type":"ContainerStarted","Data":"3352e3c26a245d2fcb9f009e833622daf958fdc9ef2ad85eb143b051e3d505d6"} Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.856818 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrgtn" podStartSLOduration=3.661290898 podStartE2EDuration="5.856801531s" podCreationTimestamp="2025-11-28 20:55:36 +0000 UTC" firstStartedPulling="2025-11-28 20:55:38.983481632 +0000 UTC m=+378.452129571" lastFinishedPulling="2025-11-28 20:55:41.178992295 +0000 UTC m=+380.647640204" observedRunningTime="2025-11-28 20:55:41.849863924 +0000 UTC m=+381.318511833" watchObservedRunningTime="2025-11-28 20:55:41.856801531 +0000 UTC m=+381.325449440" Nov 28 20:55:41 crc kubenswrapper[4957]: I1128 20:55:41.899057 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" podStartSLOduration=3.9409106659999997 podStartE2EDuration="9.899038298s" podCreationTimestamp="2025-11-28 20:55:32 +0000 UTC" firstStartedPulling="2025-11-28 20:55:34.088469833 +0000 UTC m=+373.557117742" lastFinishedPulling="2025-11-28 20:55:40.046597475 +0000 UTC m=+379.515245374" observedRunningTime="2025-11-28 20:55:41.892028829 +0000 UTC m=+381.360676758" watchObservedRunningTime="2025-11-28 20:55:41.899038298 +0000 UTC m=+381.367686207" Nov 28 20:55:42 crc kubenswrapper[4957]: I1128 20:55:42.698642 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-shng4" Nov 28 20:55:42 crc kubenswrapper[4957]: I1128 20:55:42.768359 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:55:42 crc kubenswrapper[4957]: I1128 20:55:42.860073 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.164081 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6c5554b445-6pkf9" Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.868766 4957 generic.go:334] "Generic (PLEG): container finished" podID="c14378db-11fd-4aa8-ad95-c9531993160a" containerID="32485222647bfe2f5918c56fef163542b8d4b5a02b72d432c5e150e91440e527" exitCode=0 Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.868815 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b47dz" event={"ID":"c14378db-11fd-4aa8-ad95-c9531993160a","Type":"ContainerDied","Data":"32485222647bfe2f5918c56fef163542b8d4b5a02b72d432c5e150e91440e527"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.871639 4957 generic.go:334] "Generic (PLEG): container finished" podID="564f67f5-ceaa-4b51-bb95-289d69ab2bdf" containerID="aec61dda8b077ba65f09ace7dec6a644aa2b8a5be320ab6d37cb63749c384cda" exitCode=0 Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.871691 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kmxc" 
event={"ID":"564f67f5-ceaa-4b51-bb95-289d69ab2bdf","Type":"ContainerDied","Data":"aec61dda8b077ba65f09ace7dec6a644aa2b8a5be320ab6d37cb63749c384cda"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.873736 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hp8gk" event={"ID":"f9d7934f-40b4-4156-b9c4-645229f18296","Type":"ContainerStarted","Data":"f6ab8a912310817f520ee8f2c7bbdde0ea50cffdf628b20c5b9dbcf6663e006f"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.885957 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"48258d776849d1306ca9b5547e7f8d298b64964e7c7c4499cb023b694782a6d1"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.886000 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"5758ad076a628f8b24ee60b25a531a904eba77fd31bc55c91d437d3095cd0c60"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.886023 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"11d0e98cfd4650c2fd2731199b237c79c3a5caf7125ff4c64043e60af85fa112"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.886033 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"b0a1507aceeb8415871208b8dff7b38a32620b2731d9edd09c2ea78801df8782"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.886042 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"34d4b087a366cd98f3b733b59594792a928daeab8e259aea9b1be0842c03df78"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.886051 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"fc172366-d107-4abc-b635-7bdbd5dd97dc","Type":"ContainerStarted","Data":"8edf92e87bed6c307dddbcbe851dbf54b77e29a30cdf06d2c749bfa880472b27"} Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.929966 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hp8gk" podStartSLOduration=3.919405541 podStartE2EDuration="7.929943133s" podCreationTimestamp="2025-11-28 20:55:36 +0000 UTC" firstStartedPulling="2025-11-28 20:55:38.9834269 +0000 UTC m=+378.452074809" lastFinishedPulling="2025-11-28 20:55:42.993964492 +0000 UTC m=+382.462612401" observedRunningTime="2025-11-28 20:55:43.924388702 +0000 UTC m=+383.393036621" watchObservedRunningTime="2025-11-28 20:55:43.929943133 +0000 UTC m=+383.398591042" Nov 28 20:55:43 crc kubenswrapper[4957]: I1128 20:55:43.965981 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.995202354 podStartE2EDuration="6.965958862s" podCreationTimestamp="2025-11-28 20:55:37 +0000 UTC" firstStartedPulling="2025-11-28 20:55:38.662008893 +0000 UTC m=+378.130656802" lastFinishedPulling="2025-11-28 20:55:42.632765401 +0000 UTC m=+382.101413310" observedRunningTime="2025-11-28 20:55:43.961456677 +0000 UTC m=+383.430104626" watchObservedRunningTime="2025-11-28 
20:55:43.965958862 +0000 UTC m=+383.434606811" Nov 28 20:55:44 crc kubenswrapper[4957]: I1128 20:55:44.889264 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kmxc" event={"ID":"564f67f5-ceaa-4b51-bb95-289d69ab2bdf","Type":"ContainerStarted","Data":"e6485f4e293393910c073ec80ff04d80eb71241a664154ca62fbfd839a040a18"} Nov 28 20:55:44 crc kubenswrapper[4957]: I1128 20:55:44.891547 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b47dz" event={"ID":"c14378db-11fd-4aa8-ad95-c9531993160a","Type":"ContainerStarted","Data":"a32bebd1b87cd92272e271d88a6fa4afc27578e3112be89ad97aef4eddeda1d5"} Nov 28 20:55:44 crc kubenswrapper[4957]: I1128 20:55:44.912304 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8kmxc" podStartSLOduration=3.258439702 podStartE2EDuration="6.912287407s" podCreationTimestamp="2025-11-28 20:55:38 +0000 UTC" firstStartedPulling="2025-11-28 20:55:40.731883572 +0000 UTC m=+380.200531481" lastFinishedPulling="2025-11-28 20:55:44.385731277 +0000 UTC m=+383.854379186" observedRunningTime="2025-11-28 20:55:44.910408639 +0000 UTC m=+384.379056558" watchObservedRunningTime="2025-11-28 20:55:44.912287407 +0000 UTC m=+384.380935326" Nov 28 20:55:44 crc kubenswrapper[4957]: I1128 20:55:44.928782 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b47dz" podStartSLOduration=3.351437454 podStartE2EDuration="6.928759807s" podCreationTimestamp="2025-11-28 20:55:38 +0000 UTC" firstStartedPulling="2025-11-28 20:55:40.723744075 +0000 UTC m=+380.192391984" lastFinishedPulling="2025-11-28 20:55:44.301066428 +0000 UTC m=+383.769714337" observedRunningTime="2025-11-28 20:55:44.925547025 +0000 UTC m=+384.394194934" watchObservedRunningTime="2025-11-28 20:55:44.928759807 +0000 UTC m=+384.397407716" Nov 28 20:55:45 crc kubenswrapper[4957]: I1128 20:55:45.715200 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:45 crc kubenswrapper[4957]: I1128 20:55:45.715280 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:45 crc kubenswrapper[4957]: I1128 20:55:45.720974 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:45 crc kubenswrapper[4957]: I1128 20:55:45.927202 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.010996 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.650385 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.650577 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.714417 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.833141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.833386 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:46 crc kubenswrapper[4957]: I1128 20:55:46.943498 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrgtn" Nov 28 20:55:47 crc kubenswrapper[4957]: I1128 20:55:47.414106 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:55:47 crc kubenswrapper[4957]: I1128 20:55:47.878098 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hp8gk" podUID="f9d7934f-40b4-4156-b9c4-645229f18296" containerName="registry-server" probeResult="failure" output=< Nov 28 20:55:47 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 20:55:47 crc kubenswrapper[4957]: > Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.038670 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.039033 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.086304 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.243391 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.243468 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.285659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.962107 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8kmxc" Nov 28 20:55:49 crc kubenswrapper[4957]: I1128 20:55:49.967552 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b47dz" Nov 28 20:55:56 crc kubenswrapper[4957]: I1128 20:55:56.296959 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:56 crc kubenswrapper[4957]: I1128 20:55:56.298051 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:55:56 crc kubenswrapper[4957]: I1128 20:55:56.867237 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:55:56 crc kubenswrapper[4957]: I1128 20:55:56.907180 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hp8gk" Nov 28 20:56:07 crc kubenswrapper[4957]: I1128 20:56:07.833512 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" 
podUID="a876c4a2-51d7-4d80-a6f1-9111850bf727" containerName="registry" containerID="cri-o://bc4491953b9e1cb3bf843d6c9b98e297420fd500cc1fb6ff5a15b5fa7dbe0a16" gracePeriod=30 Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.044244 4957 generic.go:334] "Generic (PLEG): container finished" podID="a876c4a2-51d7-4d80-a6f1-9111850bf727" containerID="bc4491953b9e1cb3bf843d6c9b98e297420fd500cc1fb6ff5a15b5fa7dbe0a16" exitCode=0 Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.044333 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" event={"ID":"a876c4a2-51d7-4d80-a6f1-9111850bf727","Type":"ContainerDied","Data":"bc4491953b9e1cb3bf843d6c9b98e297420fd500cc1fb6ff5a15b5fa7dbe0a16"} Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.266783 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.416851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.416924 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.416985 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.417011 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.417032 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjqnb\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.417058 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.417188 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 
20:56:08.417231 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token\") pod \"a876c4a2-51d7-4d80-a6f1-9111850bf727\" (UID: \"a876c4a2-51d7-4d80-a6f1-9111850bf727\") " Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.417906 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.418420 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.423598 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.423944 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.424159 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb" (OuterVolumeSpecName: "kube-api-access-vjqnb") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "kube-api-access-vjqnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.428488 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.433071 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.438245 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a876c4a2-51d7-4d80-a6f1-9111850bf727" (UID: "a876c4a2-51d7-4d80-a6f1-9111850bf727"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518802 4957 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518834 4957 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a876c4a2-51d7-4d80-a6f1-9111850bf727-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518844 4957 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a876c4a2-51d7-4d80-a6f1-9111850bf727-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518857 4957 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518868 4957 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518876 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjqnb\" (UniqueName: \"kubernetes.io/projected/a876c4a2-51d7-4d80-a6f1-9111850bf727-kube-api-access-vjqnb\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.518884 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a876c4a2-51d7-4d80-a6f1-9111850bf727-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.993482 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:56:08 crc kubenswrapper[4957]: I1128 20:56:08.994052 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 20:56:09 crc kubenswrapper[4957]: I1128 20:56:09.056596 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" event={"ID":"a876c4a2-51d7-4d80-a6f1-9111850bf727","Type":"ContainerDied","Data":"d8d107e0abaf4008c229623c7aa46b0e9864e29b7de54676442436071f9f3ed3"} Nov 28 20:56:09 crc kubenswrapper[4957]: I1128 
20:56:09.056667 4957 scope.go:117] "RemoveContainer" containerID="bc4491953b9e1cb3bf843d6c9b98e297420fd500cc1fb6ff5a15b5fa7dbe0a16" Nov 28 20:56:09 crc kubenswrapper[4957]: I1128 20:56:09.056701 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jx2ts" Nov 28 20:56:09 crc kubenswrapper[4957]: I1128 20:56:09.089108 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:56:09 crc kubenswrapper[4957]: I1128 20:56:09.089165 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jx2ts"] Nov 28 20:56:10 crc kubenswrapper[4957]: I1128 20:56:10.824156 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a876c4a2-51d7-4d80-a6f1-9111850bf727" path="/var/lib/kubelet/pods/a876c4a2-51d7-4d80-a6f1-9111850bf727/volumes" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.067605 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6p7fc" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" containerID="cri-o://1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58" gracePeriod=15 Nov 28 20:56:11 crc kubenswrapper[4957]: E1128 20:56:11.220227 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0bd2e1c_be2b_4cfd_a526_4ce2f3a1fbcc.slice/crio-conmon-1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58.scope\": RecentStats: unable to find data in memory cache]" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.408203 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6p7fc_d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc/console/0.log" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.408272 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564705 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjxjp\" (UniqueName: \"kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564751 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564794 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564816 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564840 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564892 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.564913 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config\") pod \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\" (UID: \"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc\") " Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.565651 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca" (OuterVolumeSpecName: "service-ca") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.565972 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.566160 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.566181 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config" (OuterVolumeSpecName: "console-config") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.569957 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.570109 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp" (OuterVolumeSpecName: "kube-api-access-tjxjp") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "kube-api-access-tjxjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.570312 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" (UID: "d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666121 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666162 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666176 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666184 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666195 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjxjp\" (UniqueName: \"kubernetes.io/projected/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-kube-api-access-tjxjp\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666203 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:11 crc kubenswrapper[4957]: I1128 20:56:11.666227 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.079894 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6p7fc_d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc/console/0.log" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.081050 4957 generic.go:334] "Generic (PLEG): container finished" podID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerID="1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58" exitCode=2 Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.081086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6p7fc" event={"ID":"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc","Type":"ContainerDied","Data":"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58"} Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.081112 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6p7fc" event={"ID":"d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc","Type":"ContainerDied","Data":"c5cb7ec476f11e7c7e3f9b42cc9555737a7f4f9a35337256230181743be3d9ee"} Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.081128 4957 scope.go:117] "RemoveContainer" containerID="1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.081175 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6p7fc" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.113011 4957 scope.go:117] "RemoveContainer" containerID="1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58" Nov 28 20:56:12 crc kubenswrapper[4957]: E1128 20:56:12.113698 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58\": container with ID starting with 1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58 not found: ID does not exist" containerID="1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.113750 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58"} err="failed to get container status \"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58\": rpc error: code = NotFound desc = could not find container \"1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58\": container with ID starting with 1bb2a9c21d23a1c0238f34b4d6418c76fdbf9a5045dd05266f9666796442ee58 not found: ID does not exist" Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.127956 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.137035 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6p7fc"] Nov 28 20:56:12 crc kubenswrapper[4957]: I1128 20:56:12.825777 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" path="/var/lib/kubelet/pods/d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc/volumes" Nov 28 20:56:16 crc kubenswrapper[4957]: I1128 20:56:16.305841 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:56:16 crc kubenswrapper[4957]: I1128 20:56:16.316286 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5556948c44-lkfw8" Nov 28 20:56:37 crc kubenswrapper[4957]: I1128 20:56:37.414169 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:56:37 crc kubenswrapper[4957]: I1128 20:56:37.464770 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:56:38 crc kubenswrapper[4957]: I1128 20:56:38.283010 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:38.999295 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:38.999822 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:38.999919 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:39.001655 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:39.001907 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954" gracePeriod=600 Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:39.265615 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954" exitCode=0 Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:39.266453 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954"} Nov 28 20:56:39 crc kubenswrapper[4957]: I1128 20:56:39.266519 4957 scope.go:117] "RemoveContainer" containerID="0c4ef86fb0ec519dc696c6cc2b8c1bb6ed44f5eeb29023f0154ff3ef6f485eeb" Nov 28 20:56:40 crc kubenswrapper[4957]: I1128 20:56:40.274869 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e"} Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.029686 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 20:57:03 crc kubenswrapper[4957]: E1128 20:57:03.031165 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a876c4a2-51d7-4d80-a6f1-9111850bf727" containerName="registry" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.031189 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a876c4a2-51d7-4d80-a6f1-9111850bf727" containerName="registry" Nov 28 20:57:03 crc kubenswrapper[4957]: E1128 20:57:03.031246 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.031266 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.031436 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bd2e1c-be2b-4cfd-a526-4ce2f3a1fbcc" containerName="console" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.031462 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a876c4a2-51d7-4d80-a6f1-9111850bf727" containerName="registry" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.032245 
4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.047986 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115542 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115621 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115645 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115664 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115748 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115777 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.115799 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbck\" (UniqueName: \"kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.216825 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " 
pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.216880 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.216907 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.216968 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.216993 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.217020 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fbck\" (UniqueName: \"kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.217051 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.217952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.217952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.218795 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " 
pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.219415 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.223675 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.224006 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.235552 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fbck\" (UniqueName: \"kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck\") pod \"console-678cb99789-gjf6b\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.352935 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:03 crc kubenswrapper[4957]: I1128 20:57:03.769744 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 20:57:04 crc kubenswrapper[4957]: I1128 20:57:04.418595 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678cb99789-gjf6b" event={"ID":"2009e762-b278-439e-9868-b694415b4b9f","Type":"ContainerStarted","Data":"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d"} Nov 28 20:57:04 crc kubenswrapper[4957]: I1128 20:57:04.418952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678cb99789-gjf6b" event={"ID":"2009e762-b278-439e-9868-b694415b4b9f","Type":"ContainerStarted","Data":"313b511a8764a709e7fa3de647628f471e51829a18722b487a620afeea15da6d"} Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.353411 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.354276 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.363113 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.385073 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-678cb99789-gjf6b" podStartSLOduration=10.385053528 podStartE2EDuration="10.385053528s" podCreationTimestamp="2025-11-28 20:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-28 20:57:04.439785073 +0000 UTC m=+463.908433022" watchObservedRunningTime="2025-11-28 20:57:13.385053528 +0000 UTC m=+472.853701447" Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.484489 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 20:57:13 crc kubenswrapper[4957]: I1128 20:57:13.559925 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.623524 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57f6c6596d-dx9hm" podUID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" containerName="console" containerID="cri-o://e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1" gracePeriod=15 Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.938453 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57f6c6596d-dx9hm_3a50ca1c-3512-41ec-b9f8-f7d43f5742eb/console/0.log" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.938764 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968072 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968137 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbskt\" (UniqueName: \"kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968282 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968327 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.968374 4957 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config\") pod \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\" (UID: \"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb\") " Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.969113 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.969142 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config" (OuterVolumeSpecName: "console-config") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.969169 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.969180 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca" (OuterVolumeSpecName: "service-ca") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.974135 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.974234 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 20:57:38 crc kubenswrapper[4957]: I1128 20:57:38.974199 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt" (OuterVolumeSpecName: "kube-api-access-sbskt") pod "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" (UID: "3a50ca1c-3512-41ec-b9f8-f7d43f5742eb"). InnerVolumeSpecName "kube-api-access-sbskt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069728 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069766 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069781 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069790 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069799 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069809 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbskt\" (UniqueName: \"kubernetes.io/projected/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-kube-api-access-sbskt\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.069817 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651253 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57f6c6596d-dx9hm_3a50ca1c-3512-41ec-b9f8-f7d43f5742eb/console/0.log" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651326 4957 generic.go:334] "Generic (PLEG): container finished" podID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" containerID="e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1" exitCode=2 Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651367 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f6c6596d-dx9hm" event={"ID":"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb","Type":"ContainerDied","Data":"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1"} Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651397 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57f6c6596d-dx9hm" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651424 4957 scope.go:117] "RemoveContainer" containerID="e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.651408 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f6c6596d-dx9hm" event={"ID":"3a50ca1c-3512-41ec-b9f8-f7d43f5742eb","Type":"ContainerDied","Data":"c807fe660820096e4318713b9377b41493ebda65f72acf19641da5d4951edd84"} Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.667591 4957 scope.go:117] "RemoveContainer" containerID="e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1" Nov 28 20:57:39 crc kubenswrapper[4957]: E1128 20:57:39.668006 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1\": container with ID starting with e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1 not found: ID does not exist" containerID="e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.668037 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1"} err="failed to get container status \"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1\": rpc error: code = NotFound desc = could not find container \"e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1\": container with ID starting with e3031c019afb4db37fb7c165aca87dbe4e6d0e44cae9e8ac16823e21c23b9df1 not found: ID does not exist" Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.688951 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:57:39 crc kubenswrapper[4957]: I1128 20:57:39.694444 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57f6c6596d-dx9hm"] Nov 28 20:57:40 crc kubenswrapper[4957]: I1128 20:57:40.820988 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" path="/var/lib/kubelet/pods/3a50ca1c-3512-41ec-b9f8-f7d43f5742eb/volumes" Nov 28 20:59:08 crc kubenswrapper[4957]: I1128 20:59:08.992946 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:59:08 crc kubenswrapper[4957]: I1128 20:59:08.993473 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 20:59:38 crc kubenswrapper[4957]: I1128 20:59:38.992670 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 20:59:38 crc kubenswrapper[4957]: I1128 
20:59:38.993254 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.761511 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"]
Nov 28 20:59:55 crc kubenswrapper[4957]: E1128 20:59:55.763109 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" containerName="console"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.763178 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" containerName="console"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.763384 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a50ca1c-3512-41ec-b9f8-f7d43f5742eb" containerName="console"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.764224 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.766661 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.778181 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"]
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.944672 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.945061 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:55 crc kubenswrapper[4957]: I1128 20:59:55.945099 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lrs\" (UniqueName: \"kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.045940 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.045995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lrs\" (UniqueName: \"kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.046090 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.046542 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.046665 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.062530 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lrs\" (UniqueName: \"kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.083174 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.478349 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"]
Nov 28 20:59:56 crc kubenswrapper[4957]: I1128 20:59:56.508935 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv" event={"ID":"9180c900-b668-4bb3-89b2-8b6018f6de18","Type":"ContainerStarted","Data":"f684c500e422452b51ffb29a32d074f0304ad65164b0b7600dab65861f7dba81"}
Nov 28 20:59:57 crc kubenswrapper[4957]: I1128 20:59:57.515026 4957 generic.go:334] "Generic (PLEG): container finished" podID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerID="c1fe50f2db1952ff08f6b6e704d639c6c95dd3d78c9f6489035317da89de5179" exitCode=0
Nov 28 20:59:57 crc kubenswrapper[4957]: I1128 20:59:57.515115 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv" event={"ID":"9180c900-b668-4bb3-89b2-8b6018f6de18","Type":"ContainerDied","Data":"c1fe50f2db1952ff08f6b6e704d639c6c95dd3d78c9f6489035317da89de5179"}
Nov 28 20:59:57 crc kubenswrapper[4957]: I1128 20:59:57.517824 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 20:59:59 crc kubenswrapper[4957]: I1128 20:59:59.531647 4957 generic.go:334] "Generic (PLEG): container finished" podID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerID="ec5e517df3c17e6cad4649ff84f74190d689ca9a520bcf562b4f6e0ada8d3e82" exitCode=0
Nov 28 20:59:59 crc kubenswrapper[4957]: I1128 20:59:59.531718 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv" event={"ID":"9180c900-b668-4bb3-89b2-8b6018f6de18","Type":"ContainerDied","Data":"ec5e517df3c17e6cad4649ff84f74190d689ca9a520bcf562b4f6e0ada8d3e82"}
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.157964 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"]
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.160029 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
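The first entry above is kubelet's HTTP liveness prober getting "connection refused" from the machine-config-daemon health endpoint: at that instant nothing was listening on 127.0.0.1:8798. A minimal Go sketch of an HTTP check of this kind follows; the URL is taken from the log line, but this is an illustration, not kubelet's prober source.

// probe.go: minimal HTTP liveness-style check (illustrative sketch).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With no listener on the port, err contains "dial tcp
		// 127.0.0.1:8798: connect: connection refused" -- the same
		// text recorded in the output= field of the journal entry.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}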
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.168530 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"]
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.197269 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.197523 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.299646 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.299815 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.299902 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqv92\" (UniqueName: \"kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.400945 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqv92\" (UniqueName: \"kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.401058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.401102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.401977 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.407692 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.418494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqv92\" (UniqueName: \"kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92\") pod \"collect-profiles-29406060-29bqb\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.508191 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.540410 4957 generic.go:334] "Generic (PLEG): container finished" podID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerID="148e7829e9481122c3163a342b7be91f165bea65121c9471da550393d3fe4d79" exitCode=0
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.540456 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv" event={"ID":"9180c900-b668-4bb3-89b2-8b6018f6de18","Type":"ContainerDied","Data":"148e7829e9481122c3163a342b7be91f165bea65121c9471da550393d3fe4d79"}
Nov 28 21:00:00 crc kubenswrapper[4957]: I1128 21:00:00.911470 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"]
Nov 28 21:00:00 crc kubenswrapper[4957]: W1128 21:00:00.917184 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5f7dc1c_d499_496d_a1d6_f6677e8a6865.slice/crio-86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca WatchSource:0}: Error finding container 86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca: Status 404 returned error can't find the container with id 86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.547183 4957 generic.go:334] "Generic (PLEG): container finished" podID="c5f7dc1c-d499-496d-a1d6-f6677e8a6865" containerID="5ee24ee3c5793df8be16dd44ff52273446b3134b21b9d36121a5ad5cd7059d59" exitCode=0
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.547254 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb" event={"ID":"c5f7dc1c-d499-496d-a1d6-f6677e8a6865","Type":"ContainerDied","Data":"5ee24ee3c5793df8be16dd44ff52273446b3134b21b9d36121a5ad5cd7059d59"}
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.547295 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb" event={"ID":"c5f7dc1c-d499-496d-a1d6-f6677e8a6865","Type":"ContainerStarted","Data":"86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca"}
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.787539 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.920586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle\") pod \"9180c900-b668-4bb3-89b2-8b6018f6de18\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") "
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.920725 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util\") pod \"9180c900-b668-4bb3-89b2-8b6018f6de18\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") "
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.920777 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6lrs\" (UniqueName: \"kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs\") pod \"9180c900-b668-4bb3-89b2-8b6018f6de18\" (UID: \"9180c900-b668-4bb3-89b2-8b6018f6de18\") "
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.923276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle" (OuterVolumeSpecName: "bundle") pod "9180c900-b668-4bb3-89b2-8b6018f6de18" (UID: "9180c900-b668-4bb3-89b2-8b6018f6de18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:00:01 crc kubenswrapper[4957]: I1128 21:00:01.928105 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs" (OuterVolumeSpecName: "kube-api-access-m6lrs") pod "9180c900-b668-4bb3-89b2-8b6018f6de18" (UID: "9180c900-b668-4bb3-89b2-8b6018f6de18"). InnerVolumeSpecName "kube-api-access-m6lrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.022101 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.022133 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6lrs\" (UniqueName: \"kubernetes.io/projected/9180c900-b668-4bb3-89b2-8b6018f6de18-kube-api-access-m6lrs\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.071486 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util" (OuterVolumeSpecName: "util") pod "9180c900-b668-4bb3-89b2-8b6018f6de18" (UID: "9180c900-b668-4bb3-89b2-8b6018f6de18"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.123146 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9180c900-b668-4bb3-89b2-8b6018f6de18-util\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.557011 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv"
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.560961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv" event={"ID":"9180c900-b668-4bb3-89b2-8b6018f6de18","Type":"ContainerDied","Data":"f684c500e422452b51ffb29a32d074f0304ad65164b0b7600dab65861f7dba81"}
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.560995 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f684c500e422452b51ffb29a32d074f0304ad65164b0b7600dab65861f7dba81"
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.757125 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.932473 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume\") pod \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") "
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.932584 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqv92\" (UniqueName: \"kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92\") pod \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") "
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.932634 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume\") pod \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\" (UID: \"c5f7dc1c-d499-496d-a1d6-f6677e8a6865\") "
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.933797 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5f7dc1c-d499-496d-a1d6-f6677e8a6865" (UID: "c5f7dc1c-d499-496d-a1d6-f6677e8a6865"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.935465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5f7dc1c-d499-496d-a1d6-f6677e8a6865" (UID: "c5f7dc1c-d499-496d-a1d6-f6677e8a6865"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:00:02 crc kubenswrapper[4957]: I1128 21:00:02.937945 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92" (OuterVolumeSpecName: "kube-api-access-xqv92") pod "c5f7dc1c-d499-496d-a1d6-f6677e8a6865" (UID: "c5f7dc1c-d499-496d-a1d6-f6677e8a6865"). InnerVolumeSpecName "kube-api-access-xqv92". PluginName "kubernetes.io/projected", VolumeGidValue ""
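The UnmountVolume started / TearDown succeeded / "Volume detached" sequences above are the downward half of kubelet's volume reconciler, which repeatedly diffs the desired state (volumes of pods still assigned to the node) against the actual state (what is currently mounted). A schematic Go sketch of that diff follows; the maps and volume names are illustrative stand-ins, not kubelet's data structures.

// volreconcile.go: desired-vs-actual diff behind the mount/unmount lines.
package main

import "fmt"

func main() {
	desired := map[string]bool{"config-volume": true} // volumes of pods still scheduled here
	actual := map[string]bool{"config-volume": true, "secret-volume": true, "kube-api-access-xqv92": true}

	// Upward pass: mount anything desired but not yet mounted.
	for v := range desired {
		if !actual[v] {
			fmt.Println("MountVolume started for volume", v)
		}
	}
	// Downward pass: unmount anything mounted but no longer desired --
	// the UnmountVolume/TearDown/"Volume detached" triplets in the log.
	for v := range actual {
		if !desired[v] {
			fmt.Println("UnmountVolume started for volume", v)
			fmt.Println("Volume detached for volume", v)
		}
	}
}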
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.034471 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqv92\" (UniqueName: \"kubernetes.io/projected/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-kube-api-access-xqv92\") on node \"crc\" DevicePath \"\"" Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.034513 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.034529 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f7dc1c-d499-496d-a1d6-f6677e8a6865-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.563518 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb" event={"ID":"c5f7dc1c-d499-496d-a1d6-f6677e8a6865","Type":"ContainerDied","Data":"86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca"} Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.563563 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b66517fc6139a80418091f4a57aa916ddc69b8c629fb63e7cd3f796a180dca" Nov 28 21:00:03 crc kubenswrapper[4957]: I1128 21:00:03.563581 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb" Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.453476 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qhqwg"] Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454044 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-controller" containerID="cri-o://ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454109 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="nbdb" containerID="cri-o://e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454238 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-acl-logging" containerID="cri-o://01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454231 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="sbdb" containerID="cri-o://20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454163 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454171 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-node" containerID="cri-o://74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.454151 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="northd" containerID="cri-o://0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670" gracePeriod=30 Nov 28 21:00:07 crc kubenswrapper[4957]: I1128 21:00:07.493274 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" containerID="cri-o://97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922" gracePeriod=30 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.592644 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/2.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.593929 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/1.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.593968 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" containerID="52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4" exitCode=2 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.594026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerDied","Data":"52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.594059 4957 scope.go:117] "RemoveContainer" containerID="7daf68fa7f05ee2890c848d5237ac48b4c0584698a2eef2c7e83e99404986009" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.594493 4957 scope.go:117] "RemoveContainer" containerID="52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.594766 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4sml5_openshift-multus(cb1978e2-0fff-4af0-b1d4-e21d677ae377)\"" pod="openshift-multus/multus-4sml5" podUID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.596402 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/3.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.597811 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-acl-logging/0.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599078 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-controller/0.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599360 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922" exitCode=0 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599374 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568" exitCode=0 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599382 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad" exitCode=0 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599389 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670" exitCode=0 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599395 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09" exitCode=0 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599401 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c" exitCode=143 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599407 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890" exitCode=143 Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599423 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599449 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599460 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599470 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 
21:00:08.599486 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.599496 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"} Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.717520 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovnkube-controller/3.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.718804 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-acl-logging/0.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.719398 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-controller/0.log" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.719454 4957 scope.go:117] "RemoveContainer" containerID="8dad089db8e1181c5032542ef14b8ea75a9b7082f4db9c0f7afc17154a354af2" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.719858 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786047 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-95bpq"] Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786752 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="northd" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786770 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="northd" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786788 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f7dc1c-d499-496d-a1d6-f6677e8a6865" containerName="collect-profiles" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786798 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f7dc1c-d499-496d-a1d6-f6677e8a6865" containerName="collect-profiles" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786811 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786818 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786824 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786830 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786843 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
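The exit codes above split cleanly in two: 0 means the container shut down on its own during the grace period, while 143 = 128 + 15 is the conventional encoding for "killed by signal 15 (SIGTERM)", so ovn-acl-logging and ovn-controller were still running when the TERM signal landed (the earlier multus exitCode=2 is an ordinary application error). A tiny decoder for this convention:

// exitcode.go: decode exitCode values from the PLEG lines above.
package main

import (
	"fmt"
	"syscall"
)

func describe(code int) string {
	if code > 128 {
		// Codes above 128 conventionally mean "killed by signal code-128".
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("terminated by signal %d (%s)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	for _, c := range []int{0, 2, 143} {
		fmt.Printf("exitCode=%d: %s\n", c, describe(c))
	}
}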
podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786849 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786860 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-node" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786867 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-node" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786880 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="pull" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786886 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="pull" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786899 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="nbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786905 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="nbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786916 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786924 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786946 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="sbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786951 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="sbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786958 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786964 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786982 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-acl-logging" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.786988 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-acl-logging" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.786999 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kubecfg-setup" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787005 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kubecfg-setup" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.787014 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="util" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787023 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="util" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.787035 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787044 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.787058 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="extract" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787063 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="extract" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787261 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787273 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787281 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="nbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787293 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787306 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787317 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787325 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovn-acl-logging" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787335 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9180c900-b668-4bb3-89b2-8b6018f6de18" containerName="extract" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787343 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="northd" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787353 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f7dc1c-d499-496d-a1d6-f6677e8a6865" containerName="collect-profiles" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787360 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787366 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="sbdb" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.787373 4957 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="kube-rbac-proxy-node" Nov 28 21:00:08 crc kubenswrapper[4957]: E1128 21:00:08.788023 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.788039 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.788256 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerName="ovnkube-controller" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.792886 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.818644 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-script-lib\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.818719 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-systemd-units\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.818825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.818897 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-kubelet\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819339 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb6410b3-2eaf-44db-945c-acaab0911701-ovn-node-metrics-cert\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819379 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-var-lib-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819438 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-config\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819474 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-systemd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819573 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-node-log\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819607 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-ovn\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819656 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-ovn-kubernetes\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819702 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-bin\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819727 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-log-socket\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819748 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-netns\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819843 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-env-overrides\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819873 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-etc-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl455\" (UniqueName: \"kubernetes.io/projected/cb6410b3-2eaf-44db-945c-acaab0911701-kube-api-access-vl455\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.819971 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-netd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.820008 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-slash\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.920960 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921014 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921034 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921054 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921071 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921089 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921105 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921125 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921141 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921155 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgqww\" (UniqueName: \"kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921169 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921184 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921199 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921232 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921250 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921263 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921280 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921301 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921314 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921327 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash\") pod \"985dfaa6-dc28-434b-9235-b6338e8f331b\" (UID: \"985dfaa6-dc28-434b-9235-b6338e8f331b\") " Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921402 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-netd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921424 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-slash\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921450 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-script-lib\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921440 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921467 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-systemd-units\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921471 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921499 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-kubelet\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921521 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921522 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-kubelet\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921563 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921568 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921579 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921588 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket" (OuterVolumeSpecName: "log-socket") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921597 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921621 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921635 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921646 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921656 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921667 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921686 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log" (OuterVolumeSpecName: "node-log") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921703 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921721 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash" (OuterVolumeSpecName: "host-slash") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921744 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-netd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921770 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-slash\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-systemd-units\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922477 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-script-lib\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.921622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cb6410b3-2eaf-44db-945c-acaab0911701-ovn-node-metrics-cert\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922523 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-var-lib-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922541 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-config\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-systemd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-node-log\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922606 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-ovn\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922618 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-var-lib-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922637 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-ovn-kubernetes\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922661 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922666 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922685 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-bin\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922713 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-node-log\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922717 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-log-socket\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922724 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-netns\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-cni-bin\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922776 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-systemd\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922749 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-log-socket\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" 
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-run-ovn\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922839 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-host-run-netns\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922914 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-env-overrides\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922942 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-etc-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.922968 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl455\" (UniqueName: \"kubernetes.io/projected/cb6410b3-2eaf-44db-945c-acaab0911701-kube-api-access-vl455\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923054 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-ovnkube-config\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923070 4957 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-slash\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923091 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb6410b3-2eaf-44db-945c-acaab0911701-etc-openvswitch\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923088 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923307 4957 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-netd\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923319 4957 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923334 4957 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923347 4957 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-kubelet\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923358 4957 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923368 4957 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923380 4957 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923390 4957 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/985dfaa6-dc28-434b-9235-b6338e8f331b-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923401 4957 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-openvswitch\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923411 4957 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-cni-bin\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923423 4957 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-log-socket\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923436 4957 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923449 4957 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-host-run-netns\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923459 4957 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-systemd-units\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923470 4957 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-node-log\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.923541 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb6410b3-2eaf-44db-945c-acaab0911701-env-overrides\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.926789 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww" (OuterVolumeSpecName: "kube-api-access-hgqww") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "kube-api-access-hgqww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.928642 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb6410b3-2eaf-44db-945c-acaab0911701-ovn-node-metrics-cert\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.928876 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.943307 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "985dfaa6-dc28-434b-9235-b6338e8f331b" (UID: "985dfaa6-dc28-434b-9235-b6338e8f331b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.943775 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl455\" (UniqueName: \"kubernetes.io/projected/cb6410b3-2eaf-44db-945c-acaab0911701-kube-api-access-vl455\") pod \"ovnkube-node-95bpq\" (UID: \"cb6410b3-2eaf-44db-945c-acaab0911701\") " pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.992195 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.992262 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.992312 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.992816 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 21:00:08 crc kubenswrapper[4957]: I1128 21:00:08.992935 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e" gracePeriod=600
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.024692 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/985dfaa6-dc28-434b-9235-b6338e8f331b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.024734 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgqww\" (UniqueName: \"kubernetes.io/projected/985dfaa6-dc28-434b-9235-b6338e8f331b-kube-api-access-hgqww\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.024749 4957 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/985dfaa6-dc28-434b-9235-b6338e8f331b-run-systemd\") on node \"crc\" DevicePath \"\""
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.122590 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.610042 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-acl-logging/0.log"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.610761 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qhqwg_985dfaa6-dc28-434b-9235-b6338e8f331b/ovn-controller/0.log"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.611185 4957 generic.go:334] "Generic (PLEG): container finished" podID="985dfaa6-dc28-434b-9235-b6338e8f331b" containerID="74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd" exitCode=0
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.611258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.611317 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.611342 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhqwg" event={"ID":"985dfaa6-dc28-434b-9235-b6338e8f331b","Type":"ContainerDied","Data":"de82f165a41e3868882aa968ae2a413dfdf1f586fa008419dfe04b2854f27944"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.611368 4957 scope.go:117] "RemoveContainer" containerID="97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.614796 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e" exitCode=0
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.614904 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.614937 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.621365 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb6410b3-2eaf-44db-945c-acaab0911701" containerID="8efeaee5bedbe07d458d7475704f8a20cf2d0056360a349a410961d923d0f8dd" exitCode=0
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.621425 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerDied","Data":"8efeaee5bedbe07d458d7475704f8a20cf2d0056360a349a410961d923d0f8dd"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.621451 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"2cafd3b11b612c4f25ab5bab944acd4e0077e9b95e2b2acca06116dd15f3a282"}
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.623119 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/2.log"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.629141 4957 scope.go:117] "RemoveContainer" containerID="20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.664436 4957 scope.go:117] "RemoveContainer" containerID="e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.712220 4957 scope.go:117] "RemoveContainer" containerID="0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.730886 4957 scope.go:117] "RemoveContainer" containerID="91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.749630 4957 scope.go:117] "RemoveContainer" containerID="74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.774131 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qhqwg"]
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.777982 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qhqwg"]
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.794370 4957 scope.go:117] "RemoveContainer" containerID="01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.855040 4957 scope.go:117] "RemoveContainer" containerID="ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.892605 4957 scope.go:117] "RemoveContainer" containerID="9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.919980 4957 scope.go:117] "RemoveContainer" containerID="97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.920421 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922\": container with ID starting with 97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922 not found: ID does not exist" containerID="97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.920474 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922"} err="failed to get container status \"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922\": rpc error: code = NotFound desc = could not find container \"97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922\": container with ID starting with 97627a51ed2803c1ac471f87bef18f04e2b865da9a39b27a0165d81c9807b922 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.920530 4957 scope.go:117] "RemoveContainer" containerID="20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.920863 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\": container with ID starting with 20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568 not found: ID does not exist" containerID="20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.920896 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568"} err="failed to get container status \"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\": rpc error: code = NotFound desc = could not find container \"20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568\": container with ID starting with 20c98f91d9a52b342dcd3b465191ec0d3c2afbc87bd5f6ad42ba2babaafb9568 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.920917 4957 scope.go:117] "RemoveContainer" containerID="e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.921156 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\": container with ID starting with e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad not found: ID does not exist" containerID="e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.921183 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad"} err="failed to get container status \"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\": rpc error: code = NotFound desc = could not find container \"e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad\": container with ID starting with e4edad02618644faabf4f7cd9d55becbf990376d483f1e598da191e658097aad not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.921198 4957 scope.go:117] "RemoveContainer" containerID="0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.921557 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\": container with ID starting with 0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670 not found: ID does not exist" containerID="0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.921666 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670"} err="failed to get container status \"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\": rpc error: code = NotFound desc = could not find container \"0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670\": container with ID starting with 0850f6c14e78df514a23d6f9d6ed9a33905c4d9b1e6313ad74ec0c6e44383670 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.921764 4957 scope.go:117] "RemoveContainer" containerID="91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.922244 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\": container with ID starting with 91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09 not found: ID does not exist" containerID="91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.922271 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09"} err="failed to get container status \"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\": rpc error: code = NotFound desc = could not find container \"91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09\": container with ID starting with 91f8a21ec21e1b4ee035f2b50d3c95b87a71db694776a10e944c04a94b88fe09 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.922292 4957 scope.go:117] "RemoveContainer" containerID="74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.922616 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\": container with ID starting with 74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd not found: ID does not exist" containerID="74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.922726 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd"} err="failed to get container status \"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\": rpc error: code = NotFound desc = could not find container \"74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd\": container with ID starting with 74f929fd715e95d2ab4ef1eb41d7ef9d98165f047a95581e3b43fd8c861471bd not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.922866 4957 scope.go:117] "RemoveContainer" containerID="01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.923222 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\": container with ID starting with 01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c not found: ID does not exist" containerID="01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.923249 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c"} err="failed to get container status \"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\": rpc error: code = NotFound desc = could not find container \"01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c\": container with ID starting with 01c1416a8bf7a22e5dcfa831abee392889a07298e434ffc6731417032bc8a27c not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.923265 4957 scope.go:117] "RemoveContainer" containerID="ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.923861 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\": container with ID starting with ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890 not found: ID does not exist" containerID="ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.923886 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890"} err="failed to get container status \"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\": rpc error: code = NotFound desc = could not find container \"ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890\": container with ID starting with ddba1c33aa7329d1a29badeac3562dc7eae4f9990be1c0cd5a226db12cea9890 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.923906 4957 scope.go:117] "RemoveContainer" containerID="9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352"
Nov 28 21:00:09 crc kubenswrapper[4957]: E1128 21:00:09.924193 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\": container with ID starting with 9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352 not found: ID does not exist" containerID="9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.924235 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352"} err="failed to get container status \"9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\": rpc error: code = NotFound desc = could not find container \"9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352\": container with ID starting with 9bb227fa71f557fe01833c4fd10c1b5337c7722dbc5b5a31aa50b97a412ef352 not found: ID does not exist"
Nov 28 21:00:09 crc kubenswrapper[4957]: I1128 21:00:09.924252 4957 scope.go:117] "RemoveContainer" containerID="49d72eb6f95332880907d7163c6aa8e342e2f384f389a618e0900e3a1f6ad954"
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635504 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"418d77bc15dcff72554463768e91427fbe7672015d19a0c8a926cdff3be063c4"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635752 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"37a746d2286cd69d520b3079c9d11dce51154399ba80195aabea3a82ece7488a"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"844766daec183c387fb0a908a0575796cce872b24c770236b7789773d15926a5"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"c72920c64f8f9f6039178e597e2308cc1accb4ff0103a8abb3a2d01afe40773e"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635800 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"5fcbf9bd586c0cf85899dcbf07fdeaf9ccaae39fd5d5c0941c874293f4cb2be1"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.635808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"a18ea18ec775d3caa7f9475876e54bba4927baed56899309d6a03d6c58eff91c"}
Nov 28 21:00:10 crc kubenswrapper[4957]: I1128 21:00:10.821426 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985dfaa6-dc28-434b-9235-b6338e8f331b" path="/var/lib/kubelet/pods/985dfaa6-dc28-434b-9235-b6338e8f331b/volumes"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.306856 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"]
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.308011 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.312672 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.312951 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mzdlh"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.312788 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.402871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrcw\" (UniqueName: \"kubernetes.io/projected/cda43354-6472-4023-914d-dde633218f08-kube-api-access-pbrcw\") pod \"obo-prometheus-operator-668cf9dfbb-v86zm\" (UID: \"cda43354-6472-4023-914d-dde633218f08\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.425608 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"]
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.426327 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.428826 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-jrsnf"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.429036 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.440378 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"]
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.441171 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.504353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrcw\" (UniqueName: \"kubernetes.io/projected/cda43354-6472-4023-914d-dde633218f08-kube-api-access-pbrcw\") pod \"obo-prometheus-operator-668cf9dfbb-v86zm\" (UID: \"cda43354-6472-4023-914d-dde633218f08\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.506437 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.506653 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.506802 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.506942 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.528757 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrcw\" (UniqueName: \"kubernetes.io/projected/cda43354-6472-4023-914d-dde633218f08-kube-api-access-pbrcw\") pod \"obo-prometheus-operator-668cf9dfbb-v86zm\" (UID: \"cda43354-6472-4023-914d-dde633218f08\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.543545 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sbrrc"]
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.544449 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.546073 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-v4968"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.546634 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608304 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608542 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608653 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frwv\" (UniqueName: \"kubernetes.io/projected/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-kube-api-access-9frwv\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608817 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.608902 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.611955 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.611973 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6439437b-8d36-450e-87e0-9b394b0aa987-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d\" (UID: \"6439437b-8d36-450e-87e0-9b394b0aa987\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.612664 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.613167 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a78ee796-8b40-4db0-9834-a4d66c77f95a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8\" (UID: \"a78ee796-8b40-4db0-9834-a4d66c77f95a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.623170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.626546 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-jbfht"]
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.627348 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht"
Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.630223 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n6k9r"
Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.656903 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(459a04cac85245dbfb7cb877b6a31199a895d6c410a016fde92c03d1d8f018c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.656970 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(459a04cac85245dbfb7cb877b6a31199a895d6c410a016fde92c03d1d8f018c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.656991 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(459a04cac85245dbfb7cb877b6a31199a895d6c410a016fde92c03d1d8f018c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.657030 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(459a04cac85245dbfb7cb877b6a31199a895d6c410a016fde92c03d1d8f018c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" podUID="cda43354-6472-4023-914d-dde633218f08" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.660833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"0e8e09d482722281de911ab4411bb8cec504f9433f7c1a959413d3525699fae1"} Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.709943 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frwv\" (UniqueName: \"kubernetes.io/projected/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-kube-api-access-9frwv\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.710176 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-kube-api-access-l5722\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.710313 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-openshift-service-ca\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.710439 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.715131 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.731484 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frwv\" (UniqueName: \"kubernetes.io/projected/d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b-kube-api-access-9frwv\") pod \"observability-operator-d8bb48f5d-sbrrc\" (UID: \"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b\") " pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.746584 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.756043 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.799225 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(7a760f7321187517990a3ddb9679eabe43d01ab8cffd6bc7d5aa8ccba18e7e99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.799309 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(7a760f7321187517990a3ddb9679eabe43d01ab8cffd6bc7d5aa8ccba18e7e99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.799339 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(7a760f7321187517990a3ddb9679eabe43d01ab8cffd6bc7d5aa8ccba18e7e99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.799401 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(7a760f7321187517990a3ddb9679eabe43d01ab8cffd6bc7d5aa8ccba18e7e99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" podUID="6439437b-8d36-450e-87e0-9b394b0aa987" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.806478 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(7895354fb48090dbfe8ba0ce2d5fa2051c03b9f7bad72d293b1e575ff718e1f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
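[Annotation] Each sandbox failure above appears four times because the same gRPC error is logged once per layer: the CRI client (log.go:32), the sandbox helper (kuberuntime_sandbox.go:72), the runtime manager (kuberuntime_manager.go:1170), and finally the pod worker (pod_workers.go:1301), which gives up with "Error syncing pod, skipping" until the next resync. An illustrative Go sketch of that wrap-per-layer pattern follows; the function names are hypothetical and this is not kubelet source.

// errorchain.go - illustrative sketch (not kubelet source) of how one
// RunPodSandbox RPC failure surfaces once per layer in the log above.
package main

import (
	"errors"
	"fmt"
)

var errRPC = errors.New("rpc error: code = Unknown desc = failed to create pod network sandbox: no CNI configuration file in /etc/kubernetes/cni/net.d/")

// CRI client boundary (analogous to log.go:32).
func runPodSandbox() error { return errRPC }

// Sandbox helper layer (analogous to kuberuntime_sandbox.go).
func createSandbox() error {
	if err := runPodSandbox(); err != nil {
		return fmt.Errorf("failed to create sandbox for pod: %w", err)
	}
	return nil
}

// Runtime-manager layer (analogous to kuberuntime_manager.go).
func syncPod() error {
	if err := createSandbox(); err != nil {
		return fmt.Errorf("CreatePodSandbox for pod failed: %w", err)
	}
	return nil
}

func main() {
	err := syncPod()
	// Pod-worker layer: the final "Error syncing pod, skipping" record.
	fmt.Println("Error syncing pod, skipping:", err)
	// errors.Is still matches the root RPC error through both wraps.
	fmt.Println("root cause is the RPC error:", errors.Is(err, errRPC))
}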
Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.806552 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(7895354fb48090dbfe8ba0ce2d5fa2051c03b9f7bad72d293b1e575ff718e1f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.806575 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(7895354fb48090dbfe8ba0ce2d5fa2051c03b9f7bad72d293b1e575ff718e1f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.806630 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(7895354fb48090dbfe8ba0ce2d5fa2051c03b9f7bad72d293b1e575ff718e1f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" podUID="a78ee796-8b40-4db0-9834-a4d66c77f95a" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.817464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-openshift-service-ca\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.817575 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-kube-api-access-l5722\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.818480 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-openshift-service-ca\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.836695 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/2841a3ed-5cfc-4a7b-a2bd-a3536018850f-kube-api-access-l5722\") pod \"perses-operator-5446b9c989-jbfht\" (UID: \"2841a3ed-5cfc-4a7b-a2bd-a3536018850f\") " pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.869034 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.894183 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(c09ff2fe74239c9ed8017c6ec953befabf2c6b74e70b5fbd5a1e1abe50feb5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.894305 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(c09ff2fe74239c9ed8017c6ec953befabf2c6b74e70b5fbd5a1e1abe50feb5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.894331 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(c09ff2fe74239c9ed8017c6ec953befabf2c6b74e70b5fbd5a1e1abe50feb5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:13 crc kubenswrapper[4957]: E1128 21:00:13.894384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(c09ff2fe74239c9ed8017c6ec953befabf2c6b74e70b5fbd5a1e1abe50feb5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" podUID="d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b" Nov 28 21:00:13 crc kubenswrapper[4957]: I1128 21:00:13.997776 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:14 crc kubenswrapper[4957]: E1128 21:00:14.029963 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(5ee383dec8a858c37358cde0d424cad4233d892e29ace3ce07066505658c50d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:14 crc kubenswrapper[4957]: E1128 21:00:14.030061 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(5ee383dec8a858c37358cde0d424cad4233d892e29ace3ce07066505658c50d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:14 crc kubenswrapper[4957]: E1128 21:00:14.030091 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(5ee383dec8a858c37358cde0d424cad4233d892e29ace3ce07066505658c50d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:14 crc kubenswrapper[4957]: E1128 21:00:14.030147 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(5ee383dec8a858c37358cde0d424cad4233d892e29ace3ce07066505658c50d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-jbfht" podUID="2841a3ed-5cfc-4a7b-a2bd-a3536018850f" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.677730 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" event={"ID":"cb6410b3-2eaf-44db-945c-acaab0911701","Type":"ContainerStarted","Data":"3b5d3d7494bd9ba0fc9a43f0ba453c1ce95deed5067c1673b1427f42dc5d8a91"} Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.678137 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.678155 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.678167 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.709831 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" podStartSLOduration=7.709804587 podStartE2EDuration="7.709804587s" podCreationTimestamp="2025-11-28 21:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:00:15.706092235 +0000 UTC m=+655.174740164" watchObservedRunningTime="2025-11-28 21:00:15.709804587 +0000 UTC m=+655.178452496" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.715927 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.721417 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.877752 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"] Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.877855 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.878332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.886589 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"] Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.886710 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.887178 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.888818 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sbrrc"] Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.888932 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.889499 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.905540 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"] Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.905752 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.906370 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.953473 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(f1ef10eb7a2b8f7113ea70d5b8034aad8740b89e974eebabacc7687e4880a349): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.953540 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(f1ef10eb7a2b8f7113ea70d5b8034aad8740b89e974eebabacc7687e4880a349): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.953560 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(f1ef10eb7a2b8f7113ea70d5b8034aad8740b89e974eebabacc7687e4880a349): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.953607 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(f1ef10eb7a2b8f7113ea70d5b8034aad8740b89e974eebabacc7687e4880a349): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" podUID="cda43354-6472-4023-914d-dde633218f08" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.981137 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(2385c6c9e15e11ddc9d9adde1f65655949ebd43dc1c6ed4baea90bf9a9590447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.981217 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(2385c6c9e15e11ddc9d9adde1f65655949ebd43dc1c6ed4baea90bf9a9590447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.981246 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(2385c6c9e15e11ddc9d9adde1f65655949ebd43dc1c6ed4baea90bf9a9590447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.981297 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(2385c6c9e15e11ddc9d9adde1f65655949ebd43dc1c6ed4baea90bf9a9590447): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" podUID="6439437b-8d36-450e-87e0-9b394b0aa987" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.988699 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-jbfht"] Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.988792 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:15 crc kubenswrapper[4957]: I1128 21:00:15.989183 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.989723 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(39383f0fb099b587f083ddf9c99b4a6c08361b49fc73d0e55a371325a7626a6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.989790 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(39383f0fb099b587f083ddf9c99b4a6c08361b49fc73d0e55a371325a7626a6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.989815 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(39383f0fb099b587f083ddf9c99b4a6c08361b49fc73d0e55a371325a7626a6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:15 crc kubenswrapper[4957]: E1128 21:00:15.989858 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(39383f0fb099b587f083ddf9c99b4a6c08361b49fc73d0e55a371325a7626a6e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" podUID="a78ee796-8b40-4db0-9834-a4d66c77f95a" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.007876 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4533b8126ecc4fc95491569af2da0853576c244024e453c6ebca02f2364074b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.007960 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4533b8126ecc4fc95491569af2da0853576c244024e453c6ebca02f2364074b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.007993 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4533b8126ecc4fc95491569af2da0853576c244024e453c6ebca02f2364074b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.008054 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4533b8126ecc4fc95491569af2da0853576c244024e453c6ebca02f2364074b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" podUID="d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.012880 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(3b82f37e4b7febb3157a4ffdf3b18c48e7199fce691912cb4da226301f0cec5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.012927 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(3b82f37e4b7febb3157a4ffdf3b18c48e7199fce691912cb4da226301f0cec5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.012947 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(3b82f37e4b7febb3157a4ffdf3b18c48e7199fce691912cb4da226301f0cec5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:16 crc kubenswrapper[4957]: E1128 21:00:16.012999 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(3b82f37e4b7febb3157a4ffdf3b18c48e7199fce691912cb4da226301f0cec5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-jbfht" podUID="2841a3ed-5cfc-4a7b-a2bd-a3536018850f" Nov 28 21:00:20 crc kubenswrapper[4957]: I1128 21:00:20.818106 4957 scope.go:117] "RemoveContainer" containerID="52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4" Nov 28 21:00:20 crc kubenswrapper[4957]: E1128 21:00:20.818676 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4sml5_openshift-multus(cb1978e2-0fff-4af0-b1d4-e21d677ae377)\"" pod="openshift-multus/multus-4sml5" podUID="cb1978e2-0fff-4af0-b1d4-e21d677ae377" Nov 28 21:00:26 crc kubenswrapper[4957]: I1128 21:00:26.812900 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:26 crc kubenswrapper[4957]: I1128 21:00:26.812907 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:26 crc kubenswrapper[4957]: I1128 21:00:26.813978 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:26 crc kubenswrapper[4957]: I1128 21:00:26.813994 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.870729 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(8d18079898f39eef7aaa032abb46b6d646a35595dc2c69d0c36c2ba7ed2eaa2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.870806 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(8d18079898f39eef7aaa032abb46b6d646a35595dc2c69d0c36c2ba7ed2eaa2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.870835 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(8d18079898f39eef7aaa032abb46b6d646a35595dc2c69d0c36c2ba7ed2eaa2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.870889 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators(a78ee796-8b40-4db0-9834-a4d66c77f95a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_openshift-operators_a78ee796-8b40-4db0-9834-a4d66c77f95a_0(8d18079898f39eef7aaa032abb46b6d646a35595dc2c69d0c36c2ba7ed2eaa2f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" podUID="a78ee796-8b40-4db0-9834-a4d66c77f95a" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.884537 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(f627a9ec3f4a9e98df849c519a8cc80621847b438271e67e1b6a6c4c3a249d1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.884610 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(f627a9ec3f4a9e98df849c519a8cc80621847b438271e67e1b6a6c4c3a249d1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.884639 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(f627a9ec3f4a9e98df849c519a8cc80621847b438271e67e1b6a6c4c3a249d1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:26 crc kubenswrapper[4957]: E1128 21:00:26.884696 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-jbfht_openshift-operators(2841a3ed-5cfc-4a7b-a2bd-a3536018850f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-jbfht_openshift-operators_2841a3ed-5cfc-4a7b-a2bd-a3536018850f_0(f627a9ec3f4a9e98df849c519a8cc80621847b438271e67e1b6a6c4c3a249d1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-jbfht" podUID="2841a3ed-5cfc-4a7b-a2bd-a3536018850f" Nov 28 21:00:28 crc kubenswrapper[4957]: I1128 21:00:28.812413 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:28 crc kubenswrapper[4957]: I1128 21:00:28.813178 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:28 crc kubenswrapper[4957]: E1128 21:00:28.846179 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(73de0ae4533fe37fdae5b45648a7bf751e50a2e37e5f4fe12c821bab75e1579e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:28 crc kubenswrapper[4957]: E1128 21:00:28.846522 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(73de0ae4533fe37fdae5b45648a7bf751e50a2e37e5f4fe12c821bab75e1579e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:28 crc kubenswrapper[4957]: E1128 21:00:28.846548 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(73de0ae4533fe37fdae5b45648a7bf751e50a2e37e5f4fe12c821bab75e1579e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:28 crc kubenswrapper[4957]: E1128 21:00:28.846599 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators(6439437b-8d36-450e-87e0-9b394b0aa987)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_openshift-operators_6439437b-8d36-450e-87e0-9b394b0aa987_0(73de0ae4533fe37fdae5b45648a7bf751e50a2e37e5f4fe12c821bab75e1579e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" podUID="6439437b-8d36-450e-87e0-9b394b0aa987" Nov 28 21:00:30 crc kubenswrapper[4957]: I1128 21:00:30.812657 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:30 crc kubenswrapper[4957]: I1128 21:00:30.818432 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:30 crc kubenswrapper[4957]: E1128 21:00:30.846193 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(5a75734cc1bcc5e0646bc753f83798327405e41bd65eae2cb0e4ee161b178ccf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:30 crc kubenswrapper[4957]: E1128 21:00:30.846637 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(5a75734cc1bcc5e0646bc753f83798327405e41bd65eae2cb0e4ee161b178ccf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:30 crc kubenswrapper[4957]: E1128 21:00:30.846711 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(5a75734cc1bcc5e0646bc753f83798327405e41bd65eae2cb0e4ee161b178ccf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:30 crc kubenswrapper[4957]: E1128 21:00:30.846817 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators(cda43354-6472-4023-914d-dde633218f08)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-v86zm_openshift-operators_cda43354-6472-4023-914d-dde633218f08_0(5a75734cc1bcc5e0646bc753f83798327405e41bd65eae2cb0e4ee161b178ccf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" podUID="cda43354-6472-4023-914d-dde633218f08" Nov 28 21:00:31 crc kubenswrapper[4957]: I1128 21:00:31.812984 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:31 crc kubenswrapper[4957]: I1128 21:00:31.813242 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:31 crc kubenswrapper[4957]: E1128 21:00:31.840424 4957 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4e2476e5f49de4009354c423f54bb75106079adb61a98834d1addc2829de311b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 21:00:31 crc kubenswrapper[4957]: E1128 21:00:31.840497 4957 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4e2476e5f49de4009354c423f54bb75106079adb61a98834d1addc2829de311b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:31 crc kubenswrapper[4957]: E1128 21:00:31.840522 4957 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4e2476e5f49de4009354c423f54bb75106079adb61a98834d1addc2829de311b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:31 crc kubenswrapper[4957]: E1128 21:00:31.840575 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-sbrrc_openshift-operators(d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-sbrrc_openshift-operators_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b_0(4e2476e5f49de4009354c423f54bb75106079adb61a98834d1addc2829de311b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" podUID="d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b" Nov 28 21:00:32 crc kubenswrapper[4957]: I1128 21:00:32.812878 4957 scope.go:117] "RemoveContainer" containerID="52728503a6f4233e9416202f6e1e9c303df45a0d3f7d9730d39c1f04dd6919b4" Nov 28 21:00:33 crc kubenswrapper[4957]: I1128 21:00:33.787697 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4sml5_cb1978e2-0fff-4af0-b1d4-e21d677ae377/kube-multus/2.log" Nov 28 21:00:33 crc kubenswrapper[4957]: I1128 21:00:33.787987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4sml5" event={"ID":"cb1978e2-0fff-4af0-b1d4-e21d677ae377","Type":"ContainerStarted","Data":"33ac7179185607b273aa777ef8872abcdedd918fe5d1ce4c9b0f9ccd2fd4ba9a"} Nov 28 21:00:38 crc kubenswrapper[4957]: I1128 21:00:38.813127 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:38 crc kubenswrapper[4957]: I1128 21:00:38.813922 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" Nov 28 21:00:39 crc kubenswrapper[4957]: I1128 21:00:39.107106 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8"] Nov 28 21:00:39 crc kubenswrapper[4957]: I1128 21:00:39.153660 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-95bpq" Nov 28 21:00:39 crc kubenswrapper[4957]: I1128 21:00:39.812294 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:39 crc kubenswrapper[4957]: I1128 21:00:39.812860 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:39 crc kubenswrapper[4957]: I1128 21:00:39.821282 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" event={"ID":"a78ee796-8b40-4db0-9834-a4d66c77f95a","Type":"ContainerStarted","Data":"19de1d2ced318978b2d01543a25bdf4cde33355bd2d5b9e39ad2eecf47b7980f"} Nov 28 21:00:40 crc kubenswrapper[4957]: I1128 21:00:40.257330 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-jbfht"] Nov 28 21:00:40 crc kubenswrapper[4957]: I1128 21:00:40.831175 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-jbfht" event={"ID":"2841a3ed-5cfc-4a7b-a2bd-a3536018850f","Type":"ContainerStarted","Data":"9e97e94eae5f04dcd648dbcd2eb68545d0798cd6f4d55e5191244d7e358ea269"} Nov 28 21:00:41 crc kubenswrapper[4957]: I1128 21:00:41.811918 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:41 crc kubenswrapper[4957]: I1128 21:00:41.812482 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" Nov 28 21:00:42 crc kubenswrapper[4957]: I1128 21:00:42.199208 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm"] Nov 28 21:00:42 crc kubenswrapper[4957]: W1128 21:00:42.208739 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcda43354_6472_4023_914d_dde633218f08.slice/crio-8ce968bcb7e723b664278d23a9b06fed7bb6d0196cef24cd26df0805c0a2ac2e WatchSource:0}: Error finding container 8ce968bcb7e723b664278d23a9b06fed7bb6d0196cef24cd26df0805c0a2ac2e: Status 404 returned error can't find the container with id 8ce968bcb7e723b664278d23a9b06fed7bb6d0196cef24cd26df0805c0a2ac2e Nov 28 21:00:42 crc kubenswrapper[4957]: I1128 21:00:42.812633 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:42 crc kubenswrapper[4957]: I1128 21:00:42.813464 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:42 crc kubenswrapper[4957]: I1128 21:00:42.840176 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" event={"ID":"cda43354-6472-4023-914d-dde633218f08","Type":"ContainerStarted","Data":"8ce968bcb7e723b664278d23a9b06fed7bb6d0196cef24cd26df0805c0a2ac2e"} Nov 28 21:00:43 crc kubenswrapper[4957]: I1128 21:00:43.216424 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sbrrc"] Nov 28 21:00:43 crc kubenswrapper[4957]: I1128 21:00:43.814059 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:43 crc kubenswrapper[4957]: I1128 21:00:43.814663 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" Nov 28 21:00:43 crc kubenswrapper[4957]: I1128 21:00:43.849552 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" event={"ID":"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b","Type":"ContainerStarted","Data":"ea2d94d3f483bbf7cae2f53a280698a5ad8bf4e5feba4f70aab6e87570c76f84"} Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.745313 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d"] Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.869025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" event={"ID":"6439437b-8d36-450e-87e0-9b394b0aa987","Type":"ContainerStarted","Data":"f2825d4ec45171d53e3090a22a558d423a3e358f57fb8f95ac7614624f1967f8"} Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.871882 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" event={"ID":"a78ee796-8b40-4db0-9834-a4d66c77f95a","Type":"ContainerStarted","Data":"6a424e08ab0dd6b717b8d194ab67c7ff9355a3563a4af68420280e95f3561da4"} Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.875921 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-jbfht" event={"ID":"2841a3ed-5cfc-4a7b-a2bd-a3536018850f","Type":"ContainerStarted","Data":"338b824ae85bf840dabbd5d6754d4a7e14df49ad6e657c7bd388c7a65be44fd9"} Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.876096 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.890904 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8" podStartSLOduration=26.446523885 podStartE2EDuration="33.890880447s" podCreationTimestamp="2025-11-28 21:00:13 +0000 UTC" firstStartedPulling="2025-11-28 21:00:39.115284326 +0000 UTC m=+678.583932235" lastFinishedPulling="2025-11-28 21:00:46.559640888 +0000 UTC m=+686.028288797" observedRunningTime="2025-11-28 21:00:46.884581331 +0000 UTC m=+686.353229230" watchObservedRunningTime="2025-11-28 21:00:46.890880447 +0000 UTC m=+686.359528346" Nov 28 21:00:46 crc kubenswrapper[4957]: I1128 21:00:46.931841 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-jbfht" podStartSLOduration=27.64205943 podStartE2EDuration="33.931825313s" podCreationTimestamp="2025-11-28 21:00:13 +0000 UTC" firstStartedPulling="2025-11-28 21:00:40.269826144 +0000 UTC m=+679.738474053" lastFinishedPulling="2025-11-28 21:00:46.559592027 +0000 UTC m=+686.028239936" observedRunningTime="2025-11-28 21:00:46.925345212 +0000 UTC m=+686.393993121" watchObservedRunningTime="2025-11-28 21:00:46.931825313 +0000 UTC m=+686.400473222" Nov 28 21:00:47 crc kubenswrapper[4957]: I1128 21:00:47.883707 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" event={"ID":"6439437b-8d36-450e-87e0-9b394b0aa987","Type":"ContainerStarted","Data":"53faa27b71d7b755ff0e6082ee7fb778de5469acc5b527cd58a4ae536e73231d"} Nov 28 21:00:47 crc 
kubenswrapper[4957]: I1128 21:00:47.901843 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d" podStartSLOduration=34.901826733 podStartE2EDuration="34.901826733s" podCreationTimestamp="2025-11-28 21:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:00:47.899520346 +0000 UTC m=+687.368168255" watchObservedRunningTime="2025-11-28 21:00:47.901826733 +0000 UTC m=+687.370474642" Nov 28 21:00:48 crc kubenswrapper[4957]: I1128 21:00:48.891607 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" event={"ID":"cda43354-6472-4023-914d-dde633218f08","Type":"ContainerStarted","Data":"722f8d8a3715d2d2cb5044f2dbdfb8f27db25cf9e3e2ad6e75ca4bdddad99edb"} Nov 28 21:00:48 crc kubenswrapper[4957]: I1128 21:00:48.916553 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-v86zm" podStartSLOduration=30.394353917 podStartE2EDuration="35.916534002s" podCreationTimestamp="2025-11-28 21:00:13 +0000 UTC" firstStartedPulling="2025-11-28 21:00:42.212302355 +0000 UTC m=+681.680950264" lastFinishedPulling="2025-11-28 21:00:47.73448244 +0000 UTC m=+687.203130349" observedRunningTime="2025-11-28 21:00:48.911293362 +0000 UTC m=+688.379941281" watchObservedRunningTime="2025-11-28 21:00:48.916534002 +0000 UTC m=+688.385181911" Nov 28 21:00:50 crc kubenswrapper[4957]: I1128 21:00:50.903238 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" event={"ID":"d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b","Type":"ContainerStarted","Data":"d122dbe6ce19e8c4c1b2149e68d1574296ccf49aaec9748ede2b9ce64d669fbc"} Nov 28 21:00:50 crc kubenswrapper[4957]: I1128 21:00:50.903735 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:50 crc kubenswrapper[4957]: I1128 21:00:50.905154 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" Nov 28 21:00:50 crc kubenswrapper[4957]: I1128 21:00:50.931174 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-sbrrc" podStartSLOduration=30.9144432 podStartE2EDuration="37.931152941s" podCreationTimestamp="2025-11-28 21:00:13 +0000 UTC" firstStartedPulling="2025-11-28 21:00:43.235551945 +0000 UTC m=+682.704199854" lastFinishedPulling="2025-11-28 21:00:50.252261686 +0000 UTC m=+689.720909595" observedRunningTime="2025-11-28 21:00:50.924023095 +0000 UTC m=+690.392671004" watchObservedRunningTime="2025-11-28 21:00:50.931152941 +0000 UTC m=+690.399800850" Nov 28 21:00:54 crc kubenswrapper[4957]: I1128 21:00:54.000403 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-jbfht" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.713163 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8kfk7"] Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.714514 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.718315 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-88wlc"] Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.718711 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.718821 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dr8r8" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.719010 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-88wlc" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.721165 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tfwvd" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.723260 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8kfk7"] Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.728783 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.734663 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-88wlc"] Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.753300 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gjw6l"] Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.753998 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.758043 4957 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8xz4c" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.769979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkw4\" (UniqueName: \"kubernetes.io/projected/bd388a87-03fe-4f7f-b36a-a89a8d110806-kube-api-access-nrkw4\") pod \"cert-manager-webhook-5655c58dd6-gjw6l\" (UID: \"bd388a87-03fe-4f7f-b36a-a89a8d110806\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.770028 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzfg\" (UniqueName: \"kubernetes.io/projected/228af258-a007-4de0-922b-f434bc1e665b-kube-api-access-jvzfg\") pod \"cert-manager-cainjector-7f985d654d-8kfk7\" (UID: \"228af258-a007-4de0-922b-f434bc1e665b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.770127 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlnpk\" (UniqueName: \"kubernetes.io/projected/e13a6b33-f471-46db-b7f2-98600799eaef-kube-api-access-xlnpk\") pod \"cert-manager-5b446d88c5-88wlc\" (UID: \"e13a6b33-f471-46db-b7f2-98600799eaef\") " pod="cert-manager/cert-manager-5b446d88c5-88wlc" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.786635 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gjw6l"] Nov 28 21:00:59 
crc kubenswrapper[4957]: I1128 21:00:59.871430 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlnpk\" (UniqueName: \"kubernetes.io/projected/e13a6b33-f471-46db-b7f2-98600799eaef-kube-api-access-xlnpk\") pod \"cert-manager-5b446d88c5-88wlc\" (UID: \"e13a6b33-f471-46db-b7f2-98600799eaef\") " pod="cert-manager/cert-manager-5b446d88c5-88wlc" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.871565 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrkw4\" (UniqueName: \"kubernetes.io/projected/bd388a87-03fe-4f7f-b36a-a89a8d110806-kube-api-access-nrkw4\") pod \"cert-manager-webhook-5655c58dd6-gjw6l\" (UID: \"bd388a87-03fe-4f7f-b36a-a89a8d110806\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.871598 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzfg\" (UniqueName: \"kubernetes.io/projected/228af258-a007-4de0-922b-f434bc1e665b-kube-api-access-jvzfg\") pod \"cert-manager-cainjector-7f985d654d-8kfk7\" (UID: \"228af258-a007-4de0-922b-f434bc1e665b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.893181 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzfg\" (UniqueName: \"kubernetes.io/projected/228af258-a007-4de0-922b-f434bc1e665b-kube-api-access-jvzfg\") pod \"cert-manager-cainjector-7f985d654d-8kfk7\" (UID: \"228af258-a007-4de0-922b-f434bc1e665b\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.893232 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrkw4\" (UniqueName: \"kubernetes.io/projected/bd388a87-03fe-4f7f-b36a-a89a8d110806-kube-api-access-nrkw4\") pod \"cert-manager-webhook-5655c58dd6-gjw6l\" (UID: \"bd388a87-03fe-4f7f-b36a-a89a8d110806\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:00:59 crc kubenswrapper[4957]: I1128 21:00:59.893184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlnpk\" (UniqueName: \"kubernetes.io/projected/e13a6b33-f471-46db-b7f2-98600799eaef-kube-api-access-xlnpk\") pod \"cert-manager-5b446d88c5-88wlc\" (UID: \"e13a6b33-f471-46db-b7f2-98600799eaef\") " pod="cert-manager/cert-manager-5b446d88c5-88wlc" Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.035795 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.047138 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-88wlc" Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.070125 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.386632 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-88wlc"] Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.602797 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8kfk7"] Nov 28 21:01:00 crc kubenswrapper[4957]: W1128 21:01:00.616571 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod228af258_a007_4de0_922b_f434bc1e665b.slice/crio-da5f17eba46627dfea752cdac41c1dc00e083079ea320dd50f379984aa42f361 WatchSource:0}: Error finding container da5f17eba46627dfea752cdac41c1dc00e083079ea320dd50f379984aa42f361: Status 404 returned error can't find the container with id da5f17eba46627dfea752cdac41c1dc00e083079ea320dd50f379984aa42f361 Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.646728 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-gjw6l"] Nov 28 21:01:00 crc kubenswrapper[4957]: W1128 21:01:00.652764 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd388a87_03fe_4f7f_b36a_a89a8d110806.slice/crio-28ce24e14df8d5d432659100ce4459968807516cbd02335f0ad3a66c4e967c1e WatchSource:0}: Error finding container 28ce24e14df8d5d432659100ce4459968807516cbd02335f0ad3a66c4e967c1e: Status 404 returned error can't find the container with id 28ce24e14df8d5d432659100ce4459968807516cbd02335f0ad3a66c4e967c1e Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.959500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" event={"ID":"228af258-a007-4de0-922b-f434bc1e665b","Type":"ContainerStarted","Data":"da5f17eba46627dfea752cdac41c1dc00e083079ea320dd50f379984aa42f361"} Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.961000 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" event={"ID":"bd388a87-03fe-4f7f-b36a-a89a8d110806","Type":"ContainerStarted","Data":"28ce24e14df8d5d432659100ce4459968807516cbd02335f0ad3a66c4e967c1e"} Nov 28 21:01:00 crc kubenswrapper[4957]: I1128 21:01:00.962534 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-88wlc" event={"ID":"e13a6b33-f471-46db-b7f2-98600799eaef","Type":"ContainerStarted","Data":"cce4f90d7c6eeab589129e5228566552d258fa4c69bf04a229c7fa7920975551"} Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.022846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" event={"ID":"228af258-a007-4de0-922b-f434bc1e665b","Type":"ContainerStarted","Data":"26f62309b22e189c452ada28cf889e830396afac544ec1e706a886530e69550b"} Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.024706 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" event={"ID":"bd388a87-03fe-4f7f-b36a-a89a8d110806","Type":"ContainerStarted","Data":"7cee7605b3e0544721a148be05a7ef9d88dc4b671a0907a03361b8f051ab3882"} Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.026221 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:01:09 crc kubenswrapper[4957]: 
I1128 21:01:09.026234 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-88wlc" event={"ID":"e13a6b33-f471-46db-b7f2-98600799eaef","Type":"ContainerStarted","Data":"a0880afabe160ccf477f3e9d98ff1d86c8f3962935bc907d97293a70320fbcf5"} Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.037006 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-8kfk7" podStartSLOduration=2.239168382 podStartE2EDuration="10.036988586s" podCreationTimestamp="2025-11-28 21:00:59 +0000 UTC" firstStartedPulling="2025-11-28 21:01:00.620592823 +0000 UTC m=+700.089240732" lastFinishedPulling="2025-11-28 21:01:08.418413027 +0000 UTC m=+707.887060936" observedRunningTime="2025-11-28 21:01:09.035581671 +0000 UTC m=+708.504229580" watchObservedRunningTime="2025-11-28 21:01:09.036988586 +0000 UTC m=+708.505636495" Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.053668 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" podStartSLOduration=2.221387641 podStartE2EDuration="10.053647929s" podCreationTimestamp="2025-11-28 21:00:59 +0000 UTC" firstStartedPulling="2025-11-28 21:01:00.655156621 +0000 UTC m=+700.123804530" lastFinishedPulling="2025-11-28 21:01:08.487416889 +0000 UTC m=+707.956064818" observedRunningTime="2025-11-28 21:01:09.050483031 +0000 UTC m=+708.519130940" watchObservedRunningTime="2025-11-28 21:01:09.053647929 +0000 UTC m=+708.522295838" Nov 28 21:01:09 crc kubenswrapper[4957]: I1128 21:01:09.067485 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-88wlc" podStartSLOduration=2.061934945 podStartE2EDuration="10.067465232s" podCreationTimestamp="2025-11-28 21:00:59 +0000 UTC" firstStartedPulling="2025-11-28 21:01:00.410204503 +0000 UTC m=+699.878852412" lastFinishedPulling="2025-11-28 21:01:08.41573477 +0000 UTC m=+707.884382699" observedRunningTime="2025-11-28 21:01:09.065664778 +0000 UTC m=+708.534312707" watchObservedRunningTime="2025-11-28 21:01:09.067465232 +0000 UTC m=+708.536113141" Nov 28 21:01:15 crc kubenswrapper[4957]: I1128 21:01:15.074271 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-gjw6l" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.252431 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp"] Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.255323 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.257953 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.260874 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp"] Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.374994 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.375047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpj9s\" (UniqueName: \"kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.375106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.460113 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d"] Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.461775 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.476593 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.476836 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpj9s\" (UniqueName: \"kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.476951 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.477100 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.477420 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d"] Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.477748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.499929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpj9s\" (UniqueName: \"kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.574052 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.578965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.579142 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.579228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx7vm\" (UniqueName: \"kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.680848 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.681235 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.681275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx7vm\" (UniqueName: \"kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.682583 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.682804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.703235 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx7vm\" (UniqueName: \"kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.776366 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp"] Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.783485 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:44 crc kubenswrapper[4957]: I1128 21:01:44.968412 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" event={"ID":"758a064a-dbeb-49f3-b1d0-d7fdde81002b","Type":"ContainerStarted","Data":"d85ee834eef1267371ede7f153cf586372fc475378cca556be9f0db7fcdc360b"} Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.036791 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d"] Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.974536 4957 generic.go:334] "Generic (PLEG): container finished" podID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerID="e0d6873b810642dced410cae2f5d2f22623255a3ced3433d0ff3763d190f13e7" exitCode=0 Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.974592 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" event={"ID":"0591b6e5-8805-4fd5-b1da-1d132f3a0e94","Type":"ContainerDied","Data":"e0d6873b810642dced410cae2f5d2f22623255a3ced3433d0ff3763d190f13e7"} Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.974645 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" event={"ID":"0591b6e5-8805-4fd5-b1da-1d132f3a0e94","Type":"ContainerStarted","Data":"c7e1931853ba2e1413f8b23ac63afc214263caa9ca47ae3f2a4cbc909a7e1dbe"} Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.975907 4957 generic.go:334] "Generic (PLEG): container finished" podID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerID="ebfb8f27dfa4105daa69afeb050e28b94667ee7dc5aedfaeeef36898998bd2c0" exitCode=0 Nov 28 21:01:45 crc kubenswrapper[4957]: I1128 21:01:45.975935 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" event={"ID":"758a064a-dbeb-49f3-b1d0-d7fdde81002b","Type":"ContainerDied","Data":"ebfb8f27dfa4105daa69afeb050e28b94667ee7dc5aedfaeeef36898998bd2c0"} Nov 28 21:01:47 crc kubenswrapper[4957]: I1128 21:01:47.990286 4957 generic.go:334] "Generic (PLEG): container finished" podID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" 
containerID="bdbc9258df9667c18e6825de4136b1be56a34de0af40d579f85017079ba63e0b" exitCode=0 Nov 28 21:01:47 crc kubenswrapper[4957]: I1128 21:01:47.990395 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" event={"ID":"0591b6e5-8805-4fd5-b1da-1d132f3a0e94","Type":"ContainerDied","Data":"bdbc9258df9667c18e6825de4136b1be56a34de0af40d579f85017079ba63e0b"} Nov 28 21:01:47 crc kubenswrapper[4957]: I1128 21:01:47.994703 4957 generic.go:334] "Generic (PLEG): container finished" podID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerID="0519f10571dda731dac4ace506f797b734e5a3f3ca0d96bcb8c64255a0b6940f" exitCode=0 Nov 28 21:01:47 crc kubenswrapper[4957]: I1128 21:01:47.994749 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" event={"ID":"758a064a-dbeb-49f3-b1d0-d7fdde81002b","Type":"ContainerDied","Data":"0519f10571dda731dac4ace506f797b734e5a3f3ca0d96bcb8c64255a0b6940f"} Nov 28 21:01:49 crc kubenswrapper[4957]: I1128 21:01:49.004291 4957 generic.go:334] "Generic (PLEG): container finished" podID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerID="f855e5522caaf8b76bd6f8e4ac082d2d711a6f566305a69b6e0cdae3ff0656a2" exitCode=0 Nov 28 21:01:49 crc kubenswrapper[4957]: I1128 21:01:49.004439 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" event={"ID":"0591b6e5-8805-4fd5-b1da-1d132f3a0e94","Type":"ContainerDied","Data":"f855e5522caaf8b76bd6f8e4ac082d2d711a6f566305a69b6e0cdae3ff0656a2"} Nov 28 21:01:49 crc kubenswrapper[4957]: I1128 21:01:49.007605 4957 generic.go:334] "Generic (PLEG): container finished" podID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerID="ab5145cfff139e75412c9f9f7b4ea781a55cf20f5510f9ef133c84bcb7e79636" exitCode=0 Nov 28 21:01:49 crc kubenswrapper[4957]: I1128 21:01:49.007652 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" event={"ID":"758a064a-dbeb-49f3-b1d0-d7fdde81002b","Type":"ContainerDied","Data":"ab5145cfff139e75412c9f9f7b4ea781a55cf20f5510f9ef133c84bcb7e79636"} Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.374425 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.375630 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.476748 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx7vm\" (UniqueName: \"kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm\") pod \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.476896 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle\") pod \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.476937 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util\") pod \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.476993 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle\") pod \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\" (UID: \"0591b6e5-8805-4fd5-b1da-1d132f3a0e94\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.477064 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpj9s\" (UniqueName: \"kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s\") pod \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.477095 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util\") pod \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\" (UID: \"758a064a-dbeb-49f3-b1d0-d7fdde81002b\") " Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.478065 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle" (OuterVolumeSpecName: "bundle") pod "758a064a-dbeb-49f3-b1d0-d7fdde81002b" (UID: "758a064a-dbeb-49f3-b1d0-d7fdde81002b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.478237 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle" (OuterVolumeSpecName: "bundle") pod "0591b6e5-8805-4fd5-b1da-1d132f3a0e94" (UID: "0591b6e5-8805-4fd5-b1da-1d132f3a0e94"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.487456 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s" (OuterVolumeSpecName: "kube-api-access-rpj9s") pod "758a064a-dbeb-49f3-b1d0-d7fdde81002b" (UID: "758a064a-dbeb-49f3-b1d0-d7fdde81002b"). InnerVolumeSpecName "kube-api-access-rpj9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.488856 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm" (OuterVolumeSpecName: "kube-api-access-dx7vm") pod "0591b6e5-8805-4fd5-b1da-1d132f3a0e94" (UID: "0591b6e5-8805-4fd5-b1da-1d132f3a0e94"). InnerVolumeSpecName "kube-api-access-dx7vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.493333 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util" (OuterVolumeSpecName: "util") pod "0591b6e5-8805-4fd5-b1da-1d132f3a0e94" (UID: "0591b6e5-8805-4fd5-b1da-1d132f3a0e94"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.497263 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util" (OuterVolumeSpecName: "util") pod "758a064a-dbeb-49f3-b1d0-d7fdde81002b" (UID: "758a064a-dbeb-49f3-b1d0-d7fdde81002b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579528 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx7vm\" (UniqueName: \"kubernetes.io/projected/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-kube-api-access-dx7vm\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579581 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579597 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-util\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579609 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0591b6e5-8805-4fd5-b1da-1d132f3a0e94-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579625 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpj9s\" (UniqueName: \"kubernetes.io/projected/758a064a-dbeb-49f3-b1d0-d7fdde81002b-kube-api-access-rpj9s\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:50 crc kubenswrapper[4957]: I1128 21:01:50.579636 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/758a064a-dbeb-49f3-b1d0-d7fdde81002b-util\") on node \"crc\" DevicePath \"\"" Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.023932 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" event={"ID":"0591b6e5-8805-4fd5-b1da-1d132f3a0e94","Type":"ContainerDied","Data":"c7e1931853ba2e1413f8b23ac63afc214263caa9ca47ae3f2a4cbc909a7e1dbe"} Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.024021 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e1931853ba2e1413f8b23ac63afc214263caa9ca47ae3f2a4cbc909a7e1dbe" Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.024007 4957 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d" Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.026737 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" event={"ID":"758a064a-dbeb-49f3-b1d0-d7fdde81002b","Type":"ContainerDied","Data":"d85ee834eef1267371ede7f153cf586372fc475378cca556be9f0db7fcdc360b"} Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.026808 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85ee834eef1267371ede7f153cf586372fc475378cca556be9f0db7fcdc360b" Nov 28 21:01:51 crc kubenswrapper[4957]: I1128 21:01:51.026986 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp" Nov 28 21:01:55 crc kubenswrapper[4957]: I1128 21:01:55.459716 4957 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.177810 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x"] Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178612 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="util" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178640 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="util" Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178652 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178659 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178668 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="util" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178675 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="util" Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178688 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="pull" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178696 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="pull" Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178712 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178719 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: E1128 21:02:02.178739 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="pull" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178745 4957 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="pull" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178883 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0591b6e5-8805-4fd5-b1da-1d132f3a0e94" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.178899 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="758a064a-dbeb-49f3-b1d0-d7fdde81002b" containerName="extract" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.179721 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.182489 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.182891 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.184859 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.185345 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-f7hjw" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.185362 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.185589 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.192684 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x"] Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.346662 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bhk\" (UniqueName: \"kubernetes.io/projected/9ea32e2a-3b67-44d4-a881-32a968981c1c-kube-api-access-29bhk\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.346744 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-apiservice-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.346917 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ea32e2a-3b67-44d4-a881-32a968981c1c-manager-config\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.347012 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.347040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-webhook-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.448167 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29bhk\" (UniqueName: \"kubernetes.io/projected/9ea32e2a-3b67-44d4-a881-32a968981c1c-kube-api-access-29bhk\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.448302 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-apiservice-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.448336 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ea32e2a-3b67-44d4-a881-32a968981c1c-manager-config\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.448374 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.448398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-webhook-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.449739 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9ea32e2a-3b67-44d4-a881-32a968981c1c-manager-config\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.454231 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-apiservice-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.454772 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-webhook-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.455064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ea32e2a-3b67-44d4-a881-32a968981c1c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.474190 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29bhk\" (UniqueName: \"kubernetes.io/projected/9ea32e2a-3b67-44d4-a881-32a968981c1c-kube-api-access-29bhk\") pod \"loki-operator-controller-manager-69579bc464-22g8x\" (UID: \"9ea32e2a-3b67-44d4-a881-32a968981c1c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.498804 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:02 crc kubenswrapper[4957]: I1128 21:02:02.929050 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x"] Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.046296 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bw9vg"] Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.047607 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.051118 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.051295 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.051320 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-htjg8" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.065627 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bw9vg"] Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.118568 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" event={"ID":"9ea32e2a-3b67-44d4-a881-32a968981c1c","Type":"ContainerStarted","Data":"2aab437c8569984880f6093ce8b6c93cd059e94ef6aaff820f47c3dd7d6b0da1"} Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.158078 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdm5f\" (UniqueName: \"kubernetes.io/projected/65a613a8-4720-4ef2-be4d-dceeee3ce44e-kube-api-access-qdm5f\") pod \"cluster-logging-operator-ff9846bd-bw9vg\" (UID: \"65a613a8-4720-4ef2-be4d-dceeee3ce44e\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.259352 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdm5f\" (UniqueName: \"kubernetes.io/projected/65a613a8-4720-4ef2-be4d-dceeee3ce44e-kube-api-access-qdm5f\") pod \"cluster-logging-operator-ff9846bd-bw9vg\" (UID: \"65a613a8-4720-4ef2-be4d-dceeee3ce44e\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.278952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdm5f\" (UniqueName: \"kubernetes.io/projected/65a613a8-4720-4ef2-be4d-dceeee3ce44e-kube-api-access-qdm5f\") pod \"cluster-logging-operator-ff9846bd-bw9vg\" (UID: \"65a613a8-4720-4ef2-be4d-dceeee3ce44e\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.367444 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" Nov 28 21:02:03 crc kubenswrapper[4957]: I1128 21:02:03.790002 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-bw9vg"] Nov 28 21:02:03 crc kubenswrapper[4957]: W1128 21:02:03.794484 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a613a8_4720_4ef2_be4d_dceeee3ce44e.slice/crio-307c20f53db90d77724184fd5749c837848a0751b0419ce95f01781818a18b50 WatchSource:0}: Error finding container 307c20f53db90d77724184fd5749c837848a0751b0419ce95f01781818a18b50: Status 404 returned error can't find the container with id 307c20f53db90d77724184fd5749c837848a0751b0419ce95f01781818a18b50 Nov 28 21:02:04 crc kubenswrapper[4957]: I1128 21:02:04.125284 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" event={"ID":"65a613a8-4720-4ef2-be4d-dceeee3ce44e","Type":"ContainerStarted","Data":"307c20f53db90d77724184fd5749c837848a0751b0419ce95f01781818a18b50"} Nov 28 21:02:07 crc kubenswrapper[4957]: I1128 21:02:07.144258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" event={"ID":"9ea32e2a-3b67-44d4-a881-32a968981c1c","Type":"ContainerStarted","Data":"e6c4a7b79ae3a298cb8032eace5f24fafed6f6552fc473b47fc06ebe99380d90"} Nov 28 21:02:10 crc kubenswrapper[4957]: I1128 21:02:10.165580 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" event={"ID":"65a613a8-4720-4ef2-be4d-dceeee3ce44e","Type":"ContainerStarted","Data":"40d395d7cf81d02ae54977a06a9d677ffc8dad2e0e470dc2545b209eef7b6356"} Nov 28 21:02:10 crc kubenswrapper[4957]: I1128 21:02:10.195305 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-bw9vg" podStartSLOduration=1.943491941 podStartE2EDuration="7.195283686s" podCreationTimestamp="2025-11-28 21:02:03 +0000 UTC" firstStartedPulling="2025-11-28 21:02:03.797330549 +0000 UTC m=+763.265978468" lastFinishedPulling="2025-11-28 21:02:09.049122304 +0000 UTC m=+768.517770213" observedRunningTime="2025-11-28 21:02:10.179990182 +0000 UTC m=+769.648638111" watchObservedRunningTime="2025-11-28 21:02:10.195283686 +0000 UTC m=+769.663931585" Nov 28 21:02:14 crc kubenswrapper[4957]: I1128 21:02:14.196392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" event={"ID":"9ea32e2a-3b67-44d4-a881-32a968981c1c","Type":"ContainerStarted","Data":"c068cd788c7af3fe40b5d58a1caa09950a591fd7f04ffe9e27838e5acdc51f70"} Nov 28 21:02:14 crc kubenswrapper[4957]: I1128 21:02:14.196967 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:14 crc kubenswrapper[4957]: I1128 21:02:14.200894 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" Nov 28 21:02:14 crc kubenswrapper[4957]: I1128 21:02:14.221744 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-69579bc464-22g8x" podStartSLOduration=1.481953441 podStartE2EDuration="12.221728931s" 
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.435079 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.436724 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.440648 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.441423 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.449192 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.503001 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.503123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w64qs\" (UniqueName: \"kubernetes.io/projected/59da9416-a60f-4cb9-af43-edcfd4694cf2-kube-api-access-w64qs\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.605093 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w64qs\" (UniqueName: \"kubernetes.io/projected/59da9416-a60f-4cb9-af43-edcfd4694cf2-kube-api-access-w64qs\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.605435 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.611239 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
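The csi_attacher.go:380 line records a real decision point in CSI mounting: a driver that advertises the STAGE_UNSTAGE_VOLUME node capability gets a two-phase mount (NodeStageVolume into a per-volume global directory, then NodePublishVolume into each pod), while a driver that does not, like the kubevirt.io.hostpath-provisioner here, has the MountDevice phase skipped outright; the "MountVolume.MountDevice succeeded" entry that follows just records the no-op as complete. A hedged sketch of that capability probe against the CSI spec's Go bindings (client wiring and error handling omitted):

    // If NodeGetCapabilities does not report STAGE_UNSTAGE_VOLUME, the
    // staging step (MountDevice/NodeStageVolume) is treated as a no-op and
    // the volume goes straight to NodePublishVolume (MountVolume.SetUp).
    package csiprobe

    import (
        "context"

        "github.com/container-storage-interface/spec/lib/go/csi"
    )

    func needsStaging(ctx context.Context, node csi.NodeClient) (bool, error) {
        resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
        if err != nil {
            return false, err
        }
        for _, c := range resp.GetCapabilities() {
            rpc := c.GetRpc()
            if rpc != nil && rpc.GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                return true, nil // two-phase: NodeStageVolume, then NodePublishVolume
            }
        }
        return false, nil // hostpath-provisioner path: publish directly per pod
    }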
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.611489 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b2190bd5b459de0b7ad29d5e0c7f8e76c475a36a5e86ec528d7d58ba6d801ba/globalmount\"" pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.629809 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w64qs\" (UniqueName: \"kubernetes.io/projected/59da9416-a60f-4cb9-af43-edcfd4694cf2-kube-api-access-w64qs\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.658185 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ef4a926-c780-4b3b-a9c0-3029e9e5f400\") pod \"minio\" (UID: \"59da9416-a60f-4cb9-af43-edcfd4694cf2\") " pod="minio-dev/minio"
Nov 28 21:02:18 crc kubenswrapper[4957]: I1128 21:02:18.766155 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Nov 28 21:02:19 crc kubenswrapper[4957]: I1128 21:02:19.274157 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Nov 28 21:02:20 crc kubenswrapper[4957]: I1128 21:02:20.250797 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"59da9416-a60f-4cb9-af43-edcfd4694cf2","Type":"ContainerStarted","Data":"e274d9d8004a824be6a1e2acecdb11808fffa43d8f230b8c314491c2355921ae"}
Nov 28 21:02:23 crc kubenswrapper[4957]: I1128 21:02:23.268988 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"59da9416-a60f-4cb9-af43-edcfd4694cf2","Type":"ContainerStarted","Data":"53f54efae3345ca7d07dba749c80f67a4a78b3df658d8a73f9076f5a04aadcce"}
Nov 28 21:02:23 crc kubenswrapper[4957]: I1128 21:02:23.287792 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.280358714 podStartE2EDuration="7.287754289s" podCreationTimestamp="2025-11-28 21:02:16 +0000 UTC" firstStartedPulling="2025-11-28 21:02:19.286989162 +0000 UTC m=+778.755637071" lastFinishedPulling="2025-11-28 21:02:22.294384737 +0000 UTC m=+781.763032646" observedRunningTime="2025-11-28 21:02:23.285246447 +0000 UTC m=+782.753894366" watchObservedRunningTime="2025-11-28 21:02:23.287754289 +0000 UTC m=+782.756402208"
Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.294059 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6"]
Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.296961 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6"
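The three-step pattern repeated around every volume in this window (VerifyControllerAttachedVolume at reconciler_common.go:245, MountVolume started at reconciler_common.go:218, MountVolume.SetUp succeeded at operation_generator.go:637) is kubelet's volume manager reconciling a desired state of the world, every volume the scheduled pods declare, against the actual state of what is attached and mounted. A schematic sketch of that loop; the types and helper here are illustrative, not kubelet's real interfaces:

    // Compare the desired mounts (what scheduled pods declare) against the
    // actual mounts and kick off whatever is missing.
    package reconciler

    type volumeKey struct{ podUID, uniqueName string }

    // reconcile mounts every desired volume not yet in the actual state;
    // "mount" stands in for the MountVolume.SetUp operation.
    func reconcile(desired, actual map[volumeKey]bool, mount func(volumeKey) error) {
        for v := range desired {
            if actual[v] {
                continue // already mounted, nothing to do
            }
            // "operationExecutor.MountVolume started for volume ..."
            if err := mount(v); err == nil {
                actual[v] = true // "MountVolume.SetUp succeeded for volume ..."
            } // on error, the next pass of the loop retries
        }
    }

Running the comparison on every sync is what makes the log so repetitive and also what makes the mount path self-healing: a failed SetUp simply shows up as still-missing state on the next pass.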
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.303172 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.303245 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.303564 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.303688 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.303791 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-8jd8w" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.316414 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.428661 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.428986 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.429323 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.429440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-config\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.429612 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtf8\" (UniqueName: \"kubernetes.io/projected/67552d33-b77e-41cc-8233-16009aa347ca-kube-api-access-zmtf8\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.446639 4957 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-2cmml"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.447507 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.451979 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.452566 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.452836 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.474652 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-2cmml"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.530715 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmtf8\" (UniqueName: \"kubernetes.io/projected/67552d33-b77e-41cc-8233-16009aa347ca-kube-api-access-zmtf8\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.530814 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.530865 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.530894 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.530915 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-config\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.531952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " 
pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.533690 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67552d33-b77e-41cc-8233-16009aa347ca-config\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.542026 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.559027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/67552d33-b77e-41cc-8233-16009aa347ca-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.565184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtf8\" (UniqueName: \"kubernetes.io/projected/67552d33-b77e-41cc-8233-16009aa347ca-kube-api-access-zmtf8\") pod \"logging-loki-distributor-76cc67bf56-4r6v6\" (UID: \"67552d33-b77e-41cc-8233-16009aa347ca\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.572857 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.574012 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.578681 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.578920 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.580235 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.632735 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.632835 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.632904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.632934 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22kb2\" (UniqueName: \"kubernetes.io/projected/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-kube-api-access-22kb2\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.632982 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.633043 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-config\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.638645 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734027 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-config\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734134 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-config\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734239 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pnzc\" (UniqueName: \"kubernetes.io/projected/ba532acb-af97-43b9-b61b-e54721951c1a-kube-api-access-2pnzc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734261 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734279 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734310 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734334 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734353 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.734374 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22kb2\" (UniqueName: \"kubernetes.io/projected/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-kube-api-access-22kb2\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.735103 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-config\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.735646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.742612 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.756759 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.758443 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.784952 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22kb2\" (UniqueName: \"kubernetes.io/projected/c90dbf3d-fe84-45cc-baca-5cbc545bbb53-kube-api-access-22kb2\") pod \"logging-loki-querier-5895d59bb8-2cmml\" (UID: \"c90dbf3d-fe84-45cc-baca-5cbc545bbb53\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.836892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pnzc\" (UniqueName: \"kubernetes.io/projected/ba532acb-af97-43b9-b61b-e54721951c1a-kube-api-access-2pnzc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.836946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.836979 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.837007 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.837048 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-config\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.838075 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-config\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.838143 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.841920 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.842175 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ba532acb-af97-43b9-b61b-e54721951c1a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.860957 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-p9fzr"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.862049 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pnzc\" (UniqueName: \"kubernetes.io/projected/ba532acb-af97-43b9-b61b-e54721951c1a-kube-api-access-2pnzc\") pod \"logging-loki-query-frontend-84558f7c9f-6rjgr\" (UID: \"ba532acb-af97-43b9-b61b-e54721951c1a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.862192 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.866633 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.866793 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.866879 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.866909 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-l9jw7" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.866998 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.875559 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.889727 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-gtfb8"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.891172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.926650 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-p9fzr"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.939686 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-gtfb8"] Nov 28 21:02:27 crc kubenswrapper[4957]: I1128 21:02:27.940462 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.042825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.042875 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.042930 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jw2\" (UniqueName: \"kubernetes.io/projected/e2dbe144-6748-4562-888e-ac850bb6c0b4-kube-api-access-r5jw2\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.042961 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043000 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-rbac\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043143 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043170 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tls-secret\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: 
\"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043197 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043293 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tenants\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-rbac\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043341 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tenants\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9h9x\" (UniqueName: \"kubernetes.io/projected/0ae348b1-f460-4971-bd3f-6832a96d1f70-kube-api-access-p9h9x\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043422 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tls-secret\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.043446 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " 
pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.069069 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.144867 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tenants\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.144941 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9h9x\" (UniqueName: \"kubernetes.io/projected/0ae348b1-f460-4971-bd3f-6832a96d1f70-kube-api-access-p9h9x\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.144976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tls-secret\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145000 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145101 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145142 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jw2\" (UniqueName: \"kubernetes.io/projected/e2dbe144-6748-4562-888e-ac850bb6c0b4-kube-api-access-r5jw2\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145165 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: 
\"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145185 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145237 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-rbac\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145260 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tls-secret\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145313 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145520 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tenants\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145553 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-rbac\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.145614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.147437 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-rbac\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.148034 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.148957 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.149834 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tls-secret\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.150163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.150499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-lokistack-gateway\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.151048 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.151622 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tenants\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.152010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " 
pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.152370 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0ae348b1-f460-4971-bd3f-6832a96d1f70-rbac\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.153551 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-tls-secret\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.157532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e2dbe144-6748-4562-888e-ac850bb6c0b4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.157675 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-tenants\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.163454 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9h9x\" (UniqueName: \"kubernetes.io/projected/0ae348b1-f460-4971-bd3f-6832a96d1f70-kube-api-access-p9h9x\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.166742 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0ae348b1-f460-4971-bd3f-6832a96d1f70-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9667d547d-gtfb8\" (UID: \"0ae348b1-f460-4971-bd3f-6832a96d1f70\") " pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.173129 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jw2\" (UniqueName: \"kubernetes.io/projected/e2dbe144-6748-4562-888e-ac850bb6c0b4-kube-api-access-r5jw2\") pod \"logging-loki-gateway-9667d547d-p9fzr\" (UID: \"e2dbe144-6748-4562-888e-ac850bb6c0b4\") " pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.207588 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.210172 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:28 crc kubenswrapper[4957]: W1128 21:02:28.217876 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67552d33_b77e_41cc_8233_16009aa347ca.slice/crio-15b1cd2591d5e72fbb96f7af6eff7ff87b0ab1b7bd6bb0944c182c697b10a9d7 WatchSource:0}: Error finding container 15b1cd2591d5e72fbb96f7af6eff7ff87b0ab1b7bd6bb0944c182c697b10a9d7: Status 404 returned error can't find the container with id 15b1cd2591d5e72fbb96f7af6eff7ff87b0ab1b7bd6bb0944c182c697b10a9d7 Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.234505 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.306697 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" event={"ID":"67552d33-b77e-41cc-8233-16009aa347ca","Type":"ContainerStarted","Data":"15b1cd2591d5e72fbb96f7af6eff7ff87b0ab1b7bd6bb0944c182c697b10a9d7"} Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.353952 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.439506 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.441133 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.443071 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.448663 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.449290 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 28 21:02:28 crc kubenswrapper[4957]: W1128 21:02:28.472020 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2dbe144_6748_4562_888e_ac850bb6c0b4.slice/crio-13a31f3e9e817541e9b5f7a5d7ffa0360c55acc9d965445d96904aa623e5b2f2 WatchSource:0}: Error finding container 13a31f3e9e817541e9b5f7a5d7ffa0360c55acc9d965445d96904aa623e5b2f2: Status 404 returned error can't find the container with id 13a31f3e9e817541e9b5f7a5d7ffa0360c55acc9d965445d96904aa623e5b2f2 Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.477373 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-p9fzr"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.494666 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-2cmml"] Nov 28 21:02:28 crc kubenswrapper[4957]: W1128 21:02:28.498406 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc90dbf3d_fe84_45cc_baca_5cbc545bbb53.slice/crio-b61fa583f928da317851c0a7bd8a0fdf8e2dd9787b4d2b35246488dff102c053 WatchSource:0}: Error finding container b61fa583f928da317851c0a7bd8a0fdf8e2dd9787b4d2b35246488dff102c053: Status 
404 returned error can't find the container with id b61fa583f928da317851c0a7bd8a0fdf8e2dd9787b4d2b35246488dff102c053 Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.526793 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.527943 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.534816 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.535080 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9667d547d-gtfb8"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.535146 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 28 21:02:28 crc kubenswrapper[4957]: W1128 21:02:28.535562 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae348b1_f460_4971_bd3f_6832a96d1f70.slice/crio-92069e2e6a5e2b75957a5bb06eee1fab68d41c4fea3a71f0853dffda1316bafb WatchSource:0}: Error finding container 92069e2e6a5e2b75957a5bb06eee1fab68d41c4fea3a71f0853dffda1316bafb: Status 404 returned error can't find the container with id 92069e2e6a5e2b75957a5bb06eee1fab68d41c4fea3a71f0853dffda1316bafb Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.539625 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.551924 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqfb9\" (UniqueName: \"kubernetes.io/projected/3866c99e-7b87-4a00-9df5-b121467d603e-kube-api-access-dqfb9\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.551983 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552020 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-config\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552042 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552080 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552115 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552133 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.552167 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9554a080-b3c2-408d-8251-5257d29cab40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9554a080-b3c2-408d-8251-5257d29cab40\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.632627 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.634284 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.636851 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.636966 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.639984 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656526 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656578 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656633 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825nd\" (UniqueName: \"kubernetes.io/projected/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-kube-api-access-825nd\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656691 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-config\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656729 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9554a080-b3c2-408d-8251-5257d29cab40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9554a080-b3c2-408d-8251-5257d29cab40\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656758 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqfb9\" (UniqueName: \"kubernetes.io/projected/3866c99e-7b87-4a00-9df5-b121467d603e-kube-api-access-dqfb9\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656799 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656860 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656903 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-config\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.656972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.657009 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.657045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.659817 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-config\") pod \"logging-loki-ingester-0\" (UID: 
\"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.660051 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.662226 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.662266 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4f3525f9f4e995d2e69fffea307a865f8f92399af6afbe2bb55d6bbd565382f/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.662757 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.662809 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9554a080-b3c2-408d-8251-5257d29cab40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9554a080-b3c2-408d-8251-5257d29cab40\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89cec3eae4f7ad2c85fedd9515e0f6a2735f411ec4ad88b58be63388f4af90eb/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.663589 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.663703 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.664614 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3866c99e-7b87-4a00-9df5-b121467d603e-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.674896 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqfb9\" (UniqueName: \"kubernetes.io/projected/3866c99e-7b87-4a00-9df5-b121467d603e-kube-api-access-dqfb9\") pod \"logging-loki-ingester-0\" 
(UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.703192 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9554a080-b3c2-408d-8251-5257d29cab40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9554a080-b3c2-408d-8251-5257d29cab40\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.708021 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6db88438-3a3d-49a4-844a-43b0d2ff1211\") pod \"logging-loki-ingester-0\" (UID: \"3866c99e-7b87-4a00-9df5-b121467d603e\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758428 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825nd\" (UniqueName: \"kubernetes.io/projected/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-kube-api-access-825nd\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758463 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758523 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758542 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758586 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758649 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-config\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758704 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtgh\" (UniqueName: \"kubernetes.io/projected/cbbe211e-0fa5-42f6-830e-4feb479b2b58-kube-api-access-nhtgh\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758750 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758917 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.758963 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.759530 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: 
\"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.760017 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-config\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.760843 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.760872 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8165006fc64e66629a7561a0dbfe2e1d8b7493530d24ff12a6f61e0c7054ee25/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.762345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.762965 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.763077 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.774577 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825nd\" (UniqueName: \"kubernetes.io/projected/b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1-kube-api-access-825nd\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.784133 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.789499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d708d528-71ed-4d2e-856e-d551cf87aff7\") pod \"logging-loki-compactor-0\" (UID: \"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.860237 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.860302 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtgh\" (UniqueName: \"kubernetes.io/projected/cbbe211e-0fa5-42f6-830e-4feb479b2b58-kube-api-access-nhtgh\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.861026 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.863331 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.863448 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.863477 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.863565 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.863638 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" 
(UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.864834 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbe211e-0fa5-42f6-830e-4feb479b2b58-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.867109 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.867434 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.867946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cbbe211e-0fa5-42f6-830e-4feb479b2b58-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.879907 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtgh\" (UniqueName: \"kubernetes.io/projected/cbbe211e-0fa5-42f6-830e-4feb479b2b58-kube-api-access-nhtgh\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.891987 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.893508 4957 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.893556 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1b3729f0f2d29d1dfb2c4db7b4ec0957de29a911b4e829c4997649c5f007d79/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.924819 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a80b49a7-7f9b-4129-8139-49f61d1650db\") pod \"logging-loki-index-gateway-0\" (UID: \"cbbe211e-0fa5-42f6-830e-4feb479b2b58\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:28 crc kubenswrapper[4957]: I1128 21:02:28.962244 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.202653 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 21:02:29 crc kubenswrapper[4957]: W1128 21:02:29.209915 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3866c99e_7b87_4a00_9df5_b121467d603e.slice/crio-6af50ecca0f1646607df55ce3fc903ea2addff1adecd084e6266cecd9c1db5fd WatchSource:0}: Error finding container 6af50ecca0f1646607df55ce3fc903ea2addff1adecd084e6266cecd9c1db5fd: Status 404 returned error can't find the container with id 6af50ecca0f1646607df55ce3fc903ea2addff1adecd084e6266cecd9c1db5fd Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.314993 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" event={"ID":"e2dbe144-6748-4562-888e-ac850bb6c0b4","Type":"ContainerStarted","Data":"13a31f3e9e817541e9b5f7a5d7ffa0360c55acc9d965445d96904aa623e5b2f2"} Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.316188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" event={"ID":"c90dbf3d-fe84-45cc-baca-5cbc545bbb53","Type":"ContainerStarted","Data":"b61fa583f928da317851c0a7bd8a0fdf8e2dd9787b4d2b35246488dff102c053"} Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.317287 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" event={"ID":"0ae348b1-f460-4971-bd3f-6832a96d1f70","Type":"ContainerStarted","Data":"92069e2e6a5e2b75957a5bb06eee1fab68d41c4fea3a71f0853dffda1316bafb"} Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.320612 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"3866c99e-7b87-4a00-9df5-b121467d603e","Type":"ContainerStarted","Data":"6af50ecca0f1646607df55ce3fc903ea2addff1adecd084e6266cecd9c1db5fd"} Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.322540 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" 
event={"ID":"ba532acb-af97-43b9-b61b-e54721951c1a","Type":"ContainerStarted","Data":"a8679aa469f2e05e5a569fc41d8b815ec320756307f01081a887259028fd9071"} Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.334286 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 21:02:29 crc kubenswrapper[4957]: W1128 21:02:29.337488 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7f3cf26_8b94_4ed1_a709_f5a5a36b40b1.slice/crio-ec3875efd347d8b2bcca0efe8c69dae79381afa48f196803ff3ad870afb3927c WatchSource:0}: Error finding container ec3875efd347d8b2bcca0efe8c69dae79381afa48f196803ff3ad870afb3927c: Status 404 returned error can't find the container with id ec3875efd347d8b2bcca0efe8c69dae79381afa48f196803ff3ad870afb3927c Nov 28 21:02:29 crc kubenswrapper[4957]: I1128 21:02:29.379770 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 21:02:29 crc kubenswrapper[4957]: W1128 21:02:29.394280 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbbe211e_0fa5_42f6_830e_4feb479b2b58.slice/crio-83512be29f2aab798e0ea5bb7d9cbe2ac53461ce9292ceedef66d85e600a6224 WatchSource:0}: Error finding container 83512be29f2aab798e0ea5bb7d9cbe2ac53461ce9292ceedef66d85e600a6224: Status 404 returned error can't find the container with id 83512be29f2aab798e0ea5bb7d9cbe2ac53461ce9292ceedef66d85e600a6224 Nov 28 21:02:30 crc kubenswrapper[4957]: I1128 21:02:30.329716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"cbbe211e-0fa5-42f6-830e-4feb479b2b58","Type":"ContainerStarted","Data":"83512be29f2aab798e0ea5bb7d9cbe2ac53461ce9292ceedef66d85e600a6224"} Nov 28 21:02:30 crc kubenswrapper[4957]: I1128 21:02:30.330434 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1","Type":"ContainerStarted","Data":"ec3875efd347d8b2bcca0efe8c69dae79381afa48f196803ff3ad870afb3927c"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.392267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"cbbe211e-0fa5-42f6-830e-4feb479b2b58","Type":"ContainerStarted","Data":"733d527f25be492a7aea7ec39148c80370559ec647a9367cb6a409b60733c3e0"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.392846 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.396180 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" event={"ID":"ba532acb-af97-43b9-b61b-e54721951c1a","Type":"ContainerStarted","Data":"51a5d6163ad715e289067ac17cd9a776c1ea7fbf867e04a6db0f61ea74126b6f"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.400365 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" event={"ID":"c90dbf3d-fe84-45cc-baca-5cbc545bbb53","Type":"ContainerStarted","Data":"aed66cec182540be1a7b84a3e0a6661cf231ff5df17b03e2ef8e2314df09c88a"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.401160 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.403291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" event={"ID":"e2dbe144-6748-4562-888e-ac850bb6c0b4","Type":"ContainerStarted","Data":"f8236d2a256a437d4c7b29a52154da698e48422218df26335ad6cc462afa695c"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.406245 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" event={"ID":"0ae348b1-f460-4971-bd3f-6832a96d1f70","Type":"ContainerStarted","Data":"371fdd39d578c4e295be82e23b994b1cc0a4dafa95cb33a46d079e59c4c42bab"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.407560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1","Type":"ContainerStarted","Data":"89fe627c68efc027f1251d6b1aed497417e2d2373be9d3d39c3dc1633a792673"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.408540 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.410394 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" event={"ID":"67552d33-b77e-41cc-8233-16009aa347ca","Type":"ContainerStarted","Data":"184dc92cb0b2511d6a9a39577775c2456da6050a5f3b005fe660db7c83ebe58c"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.412935 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.422454 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.39392666 podStartE2EDuration="11.422430183s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:29.397043402 +0000 UTC m=+788.865691311" lastFinishedPulling="2025-11-28 21:02:37.425546925 +0000 UTC m=+796.894194834" observedRunningTime="2025-11-28 21:02:38.414098989 +0000 UTC m=+797.882746939" watchObservedRunningTime="2025-11-28 21:02:38.422430183 +0000 UTC m=+797.891078102" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.425186 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"3866c99e-7b87-4a00-9df5-b121467d603e","Type":"ContainerStarted","Data":"ca811ba408f956db2e1e8a16d0aabdb6c47d8cba972cae3e5d90961674d5503d"} Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.425434 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.434640 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.349698028 podStartE2EDuration="11.434612612s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:29.33977833 +0000 UTC m=+788.808426239" lastFinishedPulling="2025-11-28 21:02:37.424692914 +0000 UTC m=+796.893340823" observedRunningTime="2025-11-28 21:02:38.432697215 +0000 UTC m=+797.901345134" watchObservedRunningTime="2025-11-28 21:02:38.434612612 +0000 UTC m=+797.903260531" Nov 28 21:02:38 crc kubenswrapper[4957]: 
I1128 21:02:38.461240 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" podStartSLOduration=2.277401793 podStartE2EDuration="11.461194422s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:28.228345988 +0000 UTC m=+787.696993897" lastFinishedPulling="2025-11-28 21:02:37.412138617 +0000 UTC m=+796.880786526" observedRunningTime="2025-11-28 21:02:38.456154599 +0000 UTC m=+797.924802518" watchObservedRunningTime="2025-11-28 21:02:38.461194422 +0000 UTC m=+797.929842371" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.477485 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" podStartSLOduration=2.5035224080000003 podStartE2EDuration="11.47746323s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:28.379906678 +0000 UTC m=+787.848554587" lastFinishedPulling="2025-11-28 21:02:37.3538475 +0000 UTC m=+796.822495409" observedRunningTime="2025-11-28 21:02:38.472730604 +0000 UTC m=+797.941378543" watchObservedRunningTime="2025-11-28 21:02:38.47746323 +0000 UTC m=+797.946111139" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.501001 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" podStartSLOduration=2.577116359 podStartE2EDuration="11.500974946s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:28.500781296 +0000 UTC m=+787.969429195" lastFinishedPulling="2025-11-28 21:02:37.424639873 +0000 UTC m=+796.893287782" observedRunningTime="2025-11-28 21:02:38.500700769 +0000 UTC m=+797.969348688" watchObservedRunningTime="2025-11-28 21:02:38.500974946 +0000 UTC m=+797.969622865" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.525360 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.257611784 podStartE2EDuration="11.525339642s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:29.212189418 +0000 UTC m=+788.680837327" lastFinishedPulling="2025-11-28 21:02:37.479917276 +0000 UTC m=+796.948565185" observedRunningTime="2025-11-28 21:02:38.522827131 +0000 UTC m=+797.991475080" watchObservedRunningTime="2025-11-28 21:02:38.525339642 +0000 UTC m=+797.993987561" Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.992789 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:02:38 crc kubenswrapper[4957]: I1128 21:02:38.993303 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:02:39 crc kubenswrapper[4957]: I1128 21:02:39.431782 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.441616 4957 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" event={"ID":"e2dbe144-6748-4562-888e-ac850bb6c0b4","Type":"ContainerStarted","Data":"f02e97edabe9ffc78283fa184e5686dac807d9144080c55eb2b61e99e29ba7a7"} Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.442037 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.442638 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.445689 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" event={"ID":"0ae348b1-f460-4971-bd3f-6832a96d1f70","Type":"ContainerStarted","Data":"60e9baefc47c596f80ed630c9cbb0781a9d943a0027c38364b77360c8dbde4b4"} Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.453306 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.462382 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.471670 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-9667d547d-p9fzr" podStartSLOduration=2.255724392 podStartE2EDuration="13.471645177s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:28.474727808 +0000 UTC m=+787.943375717" lastFinishedPulling="2025-11-28 21:02:39.690648563 +0000 UTC m=+799.159296502" observedRunningTime="2025-11-28 21:02:40.465968829 +0000 UTC m=+799.934616808" watchObservedRunningTime="2025-11-28 21:02:40.471645177 +0000 UTC m=+799.940293126" Nov 28 21:02:40 crc kubenswrapper[4957]: I1128 21:02:40.529559 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" podStartSLOduration=2.379633516 podStartE2EDuration="13.529534984s" podCreationTimestamp="2025-11-28 21:02:27 +0000 UTC" firstStartedPulling="2025-11-28 21:02:28.538679384 +0000 UTC m=+788.007327293" lastFinishedPulling="2025-11-28 21:02:39.688580842 +0000 UTC m=+799.157228761" observedRunningTime="2025-11-28 21:02:40.524324657 +0000 UTC m=+799.992972616" watchObservedRunningTime="2025-11-28 21:02:40.529534984 +0000 UTC m=+799.998182903" Nov 28 21:02:41 crc kubenswrapper[4957]: I1128 21:02:41.452371 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:41 crc kubenswrapper[4957]: I1128 21:02:41.452804 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:41 crc kubenswrapper[4957]: I1128 21:02:41.463531 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:41 crc kubenswrapper[4957]: I1128 21:02:41.463673 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9667d547d-gtfb8" Nov 28 21:02:57 crc kubenswrapper[4957]: I1128 21:02:57.651449 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-4r6v6" Nov 28 21:02:57 crc kubenswrapper[4957]: I1128 21:02:57.950411 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-6rjgr" Nov 28 21:02:58 crc kubenswrapper[4957]: I1128 21:02:58.080181 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-2cmml" Nov 28 21:02:58 crc kubenswrapper[4957]: I1128 21:02:58.793852 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 28 21:02:58 crc kubenswrapper[4957]: I1128 21:02:58.793916 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3866c99e-7b87-4a00-9df5-b121467d603e" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 21:02:58 crc kubenswrapper[4957]: I1128 21:02:58.901338 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 28 21:02:58 crc kubenswrapper[4957]: I1128 21:02:58.974030 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 21:03:08 crc kubenswrapper[4957]: I1128 21:03:08.790587 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 21:03:08 crc kubenswrapper[4957]: I1128 21:03:08.791599 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3866c99e-7b87-4a00-9df5-b121467d603e" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 21:03:08 crc kubenswrapper[4957]: I1128 21:03:08.993116 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:03:08 crc kubenswrapper[4957]: I1128 21:03:08.993243 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:03:18 crc kubenswrapper[4957]: I1128 21:03:18.791206 4957 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 21:03:18 crc kubenswrapper[4957]: I1128 21:03:18.791957 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3866c99e-7b87-4a00-9df5-b121467d603e" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 21:03:28 crc 
kubenswrapper[4957]: I1128 21:03:28.792725 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 28 21:03:38 crc kubenswrapper[4957]: I1128 21:03:38.992966 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:03:38 crc kubenswrapper[4957]: I1128 21:03:38.993534 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:03:38 crc kubenswrapper[4957]: I1128 21:03:38.993577 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:03:38 crc kubenswrapper[4957]: I1128 21:03:38.994248 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:03:38 crc kubenswrapper[4957]: I1128 21:03:38.994311 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb" gracePeriod=600 Nov 28 21:03:39 crc kubenswrapper[4957]: I1128 21:03:39.996288 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb" exitCode=0 Nov 28 21:03:39 crc kubenswrapper[4957]: I1128 21:03:39.996494 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb"} Nov 28 21:03:39 crc kubenswrapper[4957]: I1128 21:03:39.997750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798"} Nov 28 21:03:39 crc kubenswrapper[4957]: I1128 21:03:39.997797 4957 scope.go:117] "RemoveContainer" containerID="c63f94cbbd2cbb566735f5bef0248b92c87139d3c7c4737482b0b18954789c4e" Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.223436 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5fs2s"] Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.225476 4957 util.go:30] "No sandbox for pod can be found. 
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.223436 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5fs2s"]
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.225476 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.230908 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.231166 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-dtwbv"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.231469 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.231606 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5fs2s"]
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.232383 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.239341 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.245885 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.308810 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5fs2s"]
Nov 28 21:03:48 crc kubenswrapper[4957]: E1128 21:03:48.310176 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-57hbx metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-5fs2s" podUID="a7abcc9e-adce-465e-ac0c-effdb7baee7e"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387112 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387187 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387306 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387331 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387347 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387722 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hbx\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387860 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387902 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.387951 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.388028 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.388086 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.490191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.490271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.490325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.490431 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.490585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491032 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491093 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hbx\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491494 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491526 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.491600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.492398 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.492532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.492890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.493088 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.497883 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.498249 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.506791 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.508797 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.510980 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hbx\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:48 crc kubenswrapper[4957]: I1128 21:03:48.513108 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token\") pod \"collector-5fs2s\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") " pod="openshift-logging/collector-5fs2s"
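Each volume above passes through the same two-phase reconcile: VerifyControllerAttachedVolume registers the volume in the desired state of the world, then MountVolume runs and logs MountVolume.SetUp succeeded once the plugin (secret, configmap, projected, host-path, empty-dir) has materialized it under the pod's volumes directory. A toy sketch of that desired-vs-actual loop follows; the names and structure are illustrative, not the kubelet's actual volumemanager implementation.

```go
package main

import "fmt"

// reconcile compares the desired set of volumes for a pod against the set
// already mounted and mounts whatever is missing, recording each success in
// the actual state, the pattern behind the log entries above.
func reconcile(desired, mounted map[string]bool, mount func(string) error) error {
	for vol := range desired {
		if mounted[vol] {
			continue // already in the actual state of the world
		}
		if err := mount(vol); err != nil {
			return fmt.Errorf("MountVolume %q: %w", vol, err)
		}
		mounted[vol] = true // equivalent of "MountVolume.SetUp succeeded"
	}
	return nil
}

func main() {
	desired := map[string]bool{"config": true, "metrics": true, "sa-token": true}
	mounted := map[string]bool{}
	_ = reconcile(desired, mounted, func(v string) error {
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		return nil
	})
}
```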
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.099828 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.130528 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.307550 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308060 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57hbx\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308238 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308300 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308367 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308398 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308426 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308447 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir" (OuterVolumeSpecName: "datadir") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308498 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308525 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308575 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.308643 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver\") pod \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\" (UID: \"a7abcc9e-adce-465e-ac0c-effdb7baee7e\") "
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309004 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309095 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config" (OuterVolumeSpecName: "config") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309190 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309193 4957 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a7abcc9e-adce-465e-ac0c-effdb7baee7e-datadir\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309276 4957 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-entrypoint\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309298 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.309527 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.314685 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.315176 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx" (OuterVolumeSpecName: "kube-api-access-57hbx") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "kube-api-access-57hbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.315349 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token" (OuterVolumeSpecName: "sa-token") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.315504 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp" (OuterVolumeSpecName: "tmp") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.317777 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token" (OuterVolumeSpecName: "collector-token") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.318079 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics" (OuterVolumeSpecName: "metrics") pod "a7abcc9e-adce-465e-ac0c-effdb7baee7e" (UID: "a7abcc9e-adce-465e-ac0c-effdb7baee7e"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.411941 4957 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-token\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412000 4957 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412021 4957 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412042 4957 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412063 4957 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a7abcc9e-adce-465e-ac0c-effdb7baee7e-metrics\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412081 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57hbx\" (UniqueName: \"kubernetes.io/projected/a7abcc9e-adce-465e-ac0c-effdb7baee7e-kube-api-access-57hbx\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412101 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7abcc9e-adce-465e-ac0c-effdb7baee7e-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:49 crc kubenswrapper[4957]: I1128 21:03:49.412118 4957 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7abcc9e-adce-465e-ac0c-effdb7baee7e-tmp\") on node \"crc\" DevicePath \"\""
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.108888 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5fs2s"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.200088 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5fs2s"]
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.212851 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-5fs2s"]
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.231735 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-zvxsk"]
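The earlier pod_workers.go error ("failed to process volumes=[]: context canceled") and the DELETE/REMOVE pair here are two views of the same race: the API DELETE for collector-5fs2s arrived while its first sync was still mounting volumes, so the sync context was canceled mid-flight, every volume was unmounted and detached, and the controller immediately created a replacement pod, collector-zvxsk. A small sketch of how a canceled context aborts in-flight work of that shape; the volume names and delays are made up.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// setUpVolumes mounts volumes one by one but bails out as soon as the sync
// context is canceled, producing the "context canceled" shape seen above.
func setUpVolumes(ctx context.Context, vols []string) error {
	for _, v := range vols {
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed to process volumes=[%s]: %w", v, ctx.Err())
		case <-time.After(10 * time.Millisecond): // stand-in for real mount work
			fmt.Println("mounted", v)
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { // the API DELETE arriving mid-sync
		time.Sleep(15 * time.Millisecond)
		cancel()
	}()
	err := setUpVolumes(ctx, []string{"config", "metrics", "sa-token"})
	if errors.Is(err, context.Canceled) {
		fmt.Println("error syncing pod, skipping:", err)
	}
}
```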
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.233401 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.236397 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.237412 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.238049 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-dtwbv"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.238840 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.239258 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.243676 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-zvxsk"]
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.247820 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.330105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p6b\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-kube-api-access-t2p6b\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.330192 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-trusted-ca\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.330381 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config-openshift-service-cacrt\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.330465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-metrics\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.330523 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-sa-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331025 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331196 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-syslog-receiver\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331315 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-datadir\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331350 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-entrypoint\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-tmp\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.331437 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433180 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-datadir\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-entrypoint\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-tmp\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433384 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-datadir\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433430 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p6b\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-kube-api-access-t2p6b\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433481 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-trusted-ca\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433522 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config-openshift-service-cacrt\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433613 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-metrics\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433665 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-sa-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433721 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.433774 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-syslog-receiver\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.435262 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config-openshift-service-cacrt\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.435408 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-entrypoint\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.435726 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-config\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.436280 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-trusted-ca\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.440947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-metrics\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.441112 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.441183 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-tmp\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.443113 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-collector-syslog-receiver\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.458740 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-sa-token\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.462910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2p6b\" (UniqueName: \"kubernetes.io/projected/827dd4f4-1c01-43ba-b1d8-d5c774a45d46-kube-api-access-t2p6b\") pod \"collector-zvxsk\" (UID: \"827dd4f4-1c01-43ba-b1d8-d5c774a45d46\") " pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.564368 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zvxsk"
Nov 28 21:03:50 crc kubenswrapper[4957]: I1128 21:03:50.829589 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7abcc9e-adce-465e-ac0c-effdb7baee7e" path="/var/lib/kubelet/pods/a7abcc9e-adce-465e-ac0c-effdb7baee7e/volumes"
Nov 28 21:03:51 crc kubenswrapper[4957]: I1128 21:03:51.066569 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-zvxsk"]
Nov 28 21:03:51 crc kubenswrapper[4957]: I1128 21:03:51.120052 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-zvxsk" event={"ID":"827dd4f4-1c01-43ba-b1d8-d5c774a45d46","Type":"ContainerStarted","Data":"3724a3935a8fba71368ea70948ecadd2346932b04c84e25369531dc151864184"}
Nov 28 21:03:59 crc kubenswrapper[4957]: I1128 21:03:59.183565 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-zvxsk" event={"ID":"827dd4f4-1c01-43ba-b1d8-d5c774a45d46","Type":"ContainerStarted","Data":"13421a0df95316fa2b209faed9c0cd69918c20c0e4963b7335323e7c21522751"}
Nov 28 21:03:59 crc kubenswrapper[4957]: I1128 21:03:59.218564 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-zvxsk" podStartSLOduration=2.098470616 podStartE2EDuration="9.218543933s" podCreationTimestamp="2025-11-28 21:03:50 +0000 UTC" firstStartedPulling="2025-11-28 21:03:51.073934337 +0000 UTC m=+870.542582266" lastFinishedPulling="2025-11-28 21:03:58.194007674 +0000 UTC m=+877.662655583" observedRunningTime="2025-11-28 21:03:59.212523943 +0000 UTC m=+878.681171852" watchObservedRunningTime="2025-11-28 21:03:59.218543933 +0000 UTC m=+878.687191852"
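The startup-latency record above can be decoded from its own fields. The m=+N suffixes are monotonic offsets in seconds since kubelet start. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: 21:03:59.218543933 − 21:03:50 = 9.218543933s. podStartSLOduration appears to subtract image-pull time from that: lastFinishedPulling − firstStartedPulling = 877.662655583 − 870.542582266 = 7.120073317s, and 9.218543933 − 7.120073317 = 2.098470616s, which matches the logged podStartSLOduration=2.098470616 exactly. In other words, of the roughly 9.2s end-to-end startup of collector-zvxsk, about 7.1s was spent pulling the image.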
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.546505 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.548608 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.562471 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.619528 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.619745 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.619817 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rwb\" (UniqueName: \"kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.721506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.721557 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rwb\" (UniqueName: \"kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.721614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.721956 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.722017 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.748360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rwb\" (UniqueName: \"kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb\") pod \"community-operators-jj8vl\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") " pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:01 crc kubenswrapper[4957]: I1128 21:04:01.866364 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:02 crc kubenswrapper[4957]: I1128 21:04:02.178602 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:02 crc kubenswrapper[4957]: W1128 21:04:02.182184 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf71747ec_b2e9_431a_a987_426ee090517c.slice/crio-3e388cc983b295b05c435526c7a64a33f5b929cb963489e7bf5f1032de8f28bc WatchSource:0}: Error finding container 3e388cc983b295b05c435526c7a64a33f5b929cb963489e7bf5f1032de8f28bc: Status 404 returned error can't find the container with id 3e388cc983b295b05c435526c7a64a33f5b929cb963489e7bf5f1032de8f28bc
Nov 28 21:04:02 crc kubenswrapper[4957]: I1128 21:04:02.212293 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerStarted","Data":"3e388cc983b295b05c435526c7a64a33f5b929cb963489e7bf5f1032de8f28bc"}
Nov 28 21:04:03 crc kubenswrapper[4957]: I1128 21:04:03.221734 4957 generic.go:334] "Generic (PLEG): container finished" podID="f71747ec-b2e9-431a-a987-426ee090517c" containerID="b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c" exitCode=0
Nov 28 21:04:03 crc kubenswrapper[4957]: I1128 21:04:03.221870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerDied","Data":"b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c"}
Nov 28 21:04:04 crc kubenswrapper[4957]: I1128 21:04:04.231108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerStarted","Data":"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"}
Nov 28 21:04:05 crc kubenswrapper[4957]: I1128 21:04:05.240791 4957 generic.go:334] "Generic (PLEG): container finished" podID="f71747ec-b2e9-431a-a987-426ee090517c" containerID="a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da" exitCode=0
Nov 28 21:04:05 crc kubenswrapper[4957]: I1128 21:04:05.240861 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerDied","Data":"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"}
Nov 28 21:04:06 crc kubenswrapper[4957]: I1128 21:04:06.255107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerStarted","Data":"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"}
Nov 28 21:04:06 crc kubenswrapper[4957]: I1128 21:04:06.282944 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jj8vl" podStartSLOduration=2.819567889 podStartE2EDuration="5.282925987s" podCreationTimestamp="2025-11-28 21:04:01 +0000 UTC" firstStartedPulling="2025-11-28 21:04:03.224178289 +0000 UTC m=+882.692826198" lastFinishedPulling="2025-11-28 21:04:05.687536377 +0000 UTC m=+885.156184296" observedRunningTime="2025-11-28 21:04:06.278334743 +0000 UTC m=+885.746982752" watchObservedRunningTime="2025-11-28 21:04:06.282925987 +0000 UTC m=+885.751573896"
Nov 28 21:04:11 crc kubenswrapper[4957]: I1128 21:04:11.867531 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:11 crc kubenswrapper[4957]: I1128 21:04:11.868429 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:11 crc kubenswrapper[4957]: I1128 21:04:11.946659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:12 crc kubenswrapper[4957]: I1128 21:04:12.347964 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:12 crc kubenswrapper[4957]: I1128 21:04:12.394872 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.320833 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jj8vl" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="registry-server" containerID="cri-o://a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da" gracePeriod=2
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.809581 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.959280 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities\") pod \"f71747ec-b2e9-431a-a987-426ee090517c\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") "
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.959379 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content\") pod \"f71747ec-b2e9-431a-a987-426ee090517c\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") "
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.959500 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8rwb\" (UniqueName: \"kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb\") pod \"f71747ec-b2e9-431a-a987-426ee090517c\" (UID: \"f71747ec-b2e9-431a-a987-426ee090517c\") "
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.960490 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities" (OuterVolumeSpecName: "utilities") pod "f71747ec-b2e9-431a-a987-426ee090517c" (UID: "f71747ec-b2e9-431a-a987-426ee090517c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
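"Killing container with a grace period" is the SIGTERM-then-SIGKILL contract: the runtime delivers SIGTERM, waits up to the grace period (2s for this registry-server, versus 600s for the machine-config-daemon earlier), then force-kills. A stdlib sketch of that contract against an ordinary Unix process follows; it is a stand-in for the CRI call, not CRI-O's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace asks a process to exit with SIGTERM, then force-kills it if
// it is still running once the grace period expires.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("exited:", killWithGrace(cmd, 2*time.Second))
}
```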
Nov 28 21:04:14 crc kubenswrapper[4957]: I1128 21:04:14.972927 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb" (OuterVolumeSpecName: "kube-api-access-h8rwb") pod "f71747ec-b2e9-431a-a987-426ee090517c" (UID: "f71747ec-b2e9-431a-a987-426ee090517c"). InnerVolumeSpecName "kube-api-access-h8rwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.061582 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8rwb\" (UniqueName: \"kubernetes.io/projected/f71747ec-b2e9-431a-a987-426ee090517c-kube-api-access-h8rwb\") on node \"crc\" DevicePath \"\""
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.061852 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.082933 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f71747ec-b2e9-431a-a987-426ee090517c" (UID: "f71747ec-b2e9-431a-a987-426ee090517c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.163925 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71747ec-b2e9-431a-a987-426ee090517c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.333290 4957 generic.go:334] "Generic (PLEG): container finished" podID="f71747ec-b2e9-431a-a987-426ee090517c" containerID="a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da" exitCode=0
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.333358 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerDied","Data":"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"}
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.333373 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jj8vl"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.333403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jj8vl" event={"ID":"f71747ec-b2e9-431a-a987-426ee090517c","Type":"ContainerDied","Data":"3e388cc983b295b05c435526c7a64a33f5b929cb963489e7bf5f1032de8f28bc"}
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.333431 4957 scope.go:117] "RemoveContainer" containerID="a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.363252 4957 scope.go:117] "RemoveContainer" containerID="a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.369730 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.382386 4957 scope.go:117] "RemoveContainer" containerID="b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.382565 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jj8vl"]
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.404507 4957 scope.go:117] "RemoveContainer" containerID="a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"
Nov 28 21:04:15 crc kubenswrapper[4957]: E1128 21:04:15.404954 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da\": container with ID starting with a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da not found: ID does not exist" containerID="a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.405006 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da"} err="failed to get container status \"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da\": rpc error: code = NotFound desc = could not find container \"a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da\": container with ID starting with a497bee13b0badd24cd6badee6e02c8fbc67dd86bcd376c51c561c9e125d39da not found: ID does not exist"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.405039 4957 scope.go:117] "RemoveContainer" containerID="a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"
Nov 28 21:04:15 crc kubenswrapper[4957]: E1128 21:04:15.405759 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da\": container with ID starting with a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da not found: ID does not exist" containerID="a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.405797 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da"} err="failed to get container status \"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da\": rpc error: code = NotFound desc = could not find container \"a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da\": container with ID starting with a4aa59661e754220d9e5bd8dd96ae37d915c751e1bf05552833aabe61697d2da not found: ID does not exist"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.405823 4957 scope.go:117] "RemoveContainer" containerID="b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c"
Nov 28 21:04:15 crc kubenswrapper[4957]: E1128 21:04:15.406053 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c\": container with ID starting with b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c not found: ID does not exist" containerID="b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c"
Nov 28 21:04:15 crc kubenswrapper[4957]: I1128 21:04:15.406173 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c"} err="failed to get container status \"b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c\": rpc error: code = NotFound desc = could not find container \"b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c\": container with ID starting with b946ef689c7fb79346d8853f7ef13975c61e227f32b7a56ebae056dbbc13e91c not found: ID does not exist"
Nov 28 21:04:16 crc kubenswrapper[4957]: I1128 21:04:16.823088 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71747ec-b2e9-431a-a987-426ee090517c" path="/var/lib/kubelet/pods/f71747ec-b2e9-431a-a987-426ee090517c/volumes"
Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.734179 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c"]
Nov 28 21:04:26 crc kubenswrapper[4957]: E1128 21:04:26.734911 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="extract-content"
Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.734922 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="extract-content"
Nov 28 21:04:26 crc kubenswrapper[4957]: E1128 21:04:26.734933 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="extract-utilities"
Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.734938 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="extract-utilities"
Nov 28 21:04:26 crc kubenswrapper[4957]: E1128 21:04:26.734953 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="registry-server"
Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.734959 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="registry-server"
Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.735087 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71747ec-b2e9-431a-a987-426ee090517c" containerName="registry-server"
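The RemoveContainer / NotFound / DeleteContainer triplets above are a benign race: the containers had already been removed by the time the cleanup pass asked the runtime for their status, and deleting something already gone is effectively success, so the kubelet proceeds to the orphaned volumes dir and the cpu/memory manager stale-state cleanup. A sketch of that idempotent-delete shape, using a made-up sentinel error rather than the real gRPC NotFound status:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI's gRPC NotFound status (hypothetical sentinel).
var errNotFound = errors.New("container not found")

// removeContainer deletes a container and treats "already gone" as success,
// mirroring how the kubelet logs the NotFound error above but keeps going.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil && !errors.Is(err, errNotFound) {
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil // gone is gone: deletion is idempotent
}

func main() {
	runtime := func(id string) error { return errNotFound } // simulate the race
	if err := removeContainer("a497bee1", runtime); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("container removed (or already gone)")
}
```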
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.737592 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.749487 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c"] Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.758363 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5qb\" (UniqueName: \"kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.758594 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.758738 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.860709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5qb\" (UniqueName: \"kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.861045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.861179 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.861763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.861891 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:26 crc kubenswrapper[4957]: I1128 21:04:26.889898 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5qb\" (UniqueName: \"kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:27 crc kubenswrapper[4957]: I1128 21:04:27.051367 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:27 crc kubenswrapper[4957]: I1128 21:04:27.450847 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c"] Nov 28 21:04:28 crc kubenswrapper[4957]: I1128 21:04:28.451575 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerID="5bb78addadb41ded41dd668134d3a9e821efe5b6148423a4231a8e772ffcbe8a" exitCode=0 Nov 28 21:04:28 crc kubenswrapper[4957]: I1128 21:04:28.451874 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" event={"ID":"ffa7bb6e-8f47-46e9-92e6-0669f49584f9","Type":"ContainerDied","Data":"5bb78addadb41ded41dd668134d3a9e821efe5b6148423a4231a8e772ffcbe8a"} Nov 28 21:04:28 crc kubenswrapper[4957]: I1128 21:04:28.451903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" event={"ID":"ffa7bb6e-8f47-46e9-92e6-0669f49584f9","Type":"ContainerStarted","Data":"6e78fa0b2ba1f8174f6fbd3f9d4bd1351b21cf58946734dff23eefd0556ad1ee"} Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.105101 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.110693 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.112157 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.205932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.205987 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.206153 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lhr\" (UniqueName: \"kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.307583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lhr\" (UniqueName: \"kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.307657 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.307692 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.308229 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.308291 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.329421 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z9lhr\" (UniqueName: \"kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr\") pod \"redhat-operators-tn9g5\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.467796 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:29 crc kubenswrapper[4957]: I1128 21:04:29.876280 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:29 crc kubenswrapper[4957]: W1128 21:04:29.883653 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c429f5_fb2f_4d55_a312_e3a2e71ef475.slice/crio-552b288222d1537886e0f62aa02598be0894fe73ef7488367002a50cad18808c WatchSource:0}: Error finding container 552b288222d1537886e0f62aa02598be0894fe73ef7488367002a50cad18808c: Status 404 returned error can't find the container with id 552b288222d1537886e0f62aa02598be0894fe73ef7488367002a50cad18808c Nov 28 21:04:30 crc kubenswrapper[4957]: I1128 21:04:30.465705 4957 generic.go:334] "Generic (PLEG): container finished" podID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerID="69f2f7ec26040a3c70d1a965eae418f3c94167a9c668460664a8a6cbe3175d24" exitCode=0 Nov 28 21:04:30 crc kubenswrapper[4957]: I1128 21:04:30.465765 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerDied","Data":"69f2f7ec26040a3c70d1a965eae418f3c94167a9c668460664a8a6cbe3175d24"} Nov 28 21:04:30 crc kubenswrapper[4957]: I1128 21:04:30.466152 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerStarted","Data":"552b288222d1537886e0f62aa02598be0894fe73ef7488367002a50cad18808c"} Nov 28 21:04:30 crc kubenswrapper[4957]: I1128 21:04:30.469911 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerID="ecf0e709b75d39acc211b04ba432f62229100a14b7c0e963f2312fee69f5ca8b" exitCode=0 Nov 28 21:04:30 crc kubenswrapper[4957]: I1128 21:04:30.470004 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" event={"ID":"ffa7bb6e-8f47-46e9-92e6-0669f49584f9","Type":"ContainerDied","Data":"ecf0e709b75d39acc211b04ba432f62229100a14b7c0e963f2312fee69f5ca8b"} Nov 28 21:04:31 crc kubenswrapper[4957]: I1128 21:04:31.477095 4957 generic.go:334] "Generic (PLEG): container finished" podID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerID="1a7650006f279d2b83e004b5adc05bbebd96e69d11f1bc0c5eb2400c52febf04" exitCode=0 Nov 28 21:04:31 crc kubenswrapper[4957]: I1128 21:04:31.477129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" event={"ID":"ffa7bb6e-8f47-46e9-92e6-0669f49584f9","Type":"ContainerDied","Data":"1a7650006f279d2b83e004b5adc05bbebd96e69d11f1bc0c5eb2400c52febf04"} Nov 28 21:04:31 crc kubenswrapper[4957]: I1128 21:04:31.478754 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" 
event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerStarted","Data":"f31a9d0c924d429bd1393c9baff66d6731b994f7c6651df8c50fd490d5af0a3c"} Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.487251 4957 generic.go:334] "Generic (PLEG): container finished" podID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerID="f31a9d0c924d429bd1393c9baff66d6731b994f7c6651df8c50fd490d5af0a3c" exitCode=0 Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.487349 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerDied","Data":"f31a9d0c924d429bd1393c9baff66d6731b994f7c6651df8c50fd490d5af0a3c"} Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.824382 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.977752 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle\") pod \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.978333 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5qb\" (UniqueName: \"kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb\") pod \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.978389 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util\") pod \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\" (UID: \"ffa7bb6e-8f47-46e9-92e6-0669f49584f9\") " Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.979374 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle" (OuterVolumeSpecName: "bundle") pod "ffa7bb6e-8f47-46e9-92e6-0669f49584f9" (UID: "ffa7bb6e-8f47-46e9-92e6-0669f49584f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.984442 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb" (OuterVolumeSpecName: "kube-api-access-pt5qb") pod "ffa7bb6e-8f47-46e9-92e6-0669f49584f9" (UID: "ffa7bb6e-8f47-46e9-92e6-0669f49584f9"). InnerVolumeSpecName "kube-api-access-pt5qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:04:32 crc kubenswrapper[4957]: I1128 21:04:32.991235 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util" (OuterVolumeSpecName: "util") pod "ffa7bb6e-8f47-46e9-92e6-0669f49584f9" (UID: "ffa7bb6e-8f47-46e9-92e6-0669f49584f9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.079724 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.079764 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5qb\" (UniqueName: \"kubernetes.io/projected/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-kube-api-access-pt5qb\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.079776 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffa7bb6e-8f47-46e9-92e6-0669f49584f9-util\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.495524 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" event={"ID":"ffa7bb6e-8f47-46e9-92e6-0669f49584f9","Type":"ContainerDied","Data":"6e78fa0b2ba1f8174f6fbd3f9d4bd1351b21cf58946734dff23eefd0556ad1ee"} Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.495616 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e78fa0b2ba1f8174f6fbd3f9d4bd1351b21cf58946734dff23eefd0556ad1ee" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.495718 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c" Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.499799 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerStarted","Data":"55630af8daf3dbf926ae4cf61cbed9b4e2a298c6de8d18c14be61dd0e907562b"} Nov 28 21:04:33 crc kubenswrapper[4957]: I1128 21:04:33.528402 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tn9g5" podStartSLOduration=1.675305645 podStartE2EDuration="4.528368273s" podCreationTimestamp="2025-11-28 21:04:29 +0000 UTC" firstStartedPulling="2025-11-28 21:04:30.468136739 +0000 UTC m=+909.936784648" lastFinishedPulling="2025-11-28 21:04:33.321199367 +0000 UTC m=+912.789847276" observedRunningTime="2025-11-28 21:04:33.518259552 +0000 UTC m=+912.986907471" watchObservedRunningTime="2025-11-28 21:04:33.528368273 +0000 UTC m=+912.997016182" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.308125 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:04:37 crc kubenswrapper[4957]: E1128 21:04:37.309248 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="util" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.309265 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="util" Nov 28 21:04:37 crc kubenswrapper[4957]: E1128 21:04:37.309286 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="pull" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.309292 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="pull" Nov 28 21:04:37 crc 
kubenswrapper[4957]: E1128 21:04:37.309324 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="extract" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.309331 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="extract" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.309465 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa7bb6e-8f47-46e9-92e6-0669f49584f9" containerName="extract" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.310717 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.325997 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.373630 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tq7k\" (UniqueName: \"kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.373682 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.373871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.475995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tq7k\" (UniqueName: \"kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.476072 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.476102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.476654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.476777 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.500300 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tq7k\" (UniqueName: \"kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k\") pod \"redhat-marketplace-dm94h\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:37 crc kubenswrapper[4957]: I1128 21:04:37.628710 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.041464 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw"] Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.042647 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.045082 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.045272 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.045459 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jlnxs" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.057091 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw"] Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.086478 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5gx\" (UniqueName: \"kubernetes.io/projected/66d17f19-66f9-41c9-9566-fca688da8506-kube-api-access-xn5gx\") pod \"nmstate-operator-5b5b58f5c8-fzfdw\" (UID: \"66d17f19-66f9-41c9-9566-fca688da8506\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.153834 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:04:38 crc kubenswrapper[4957]: W1128 21:04:38.159449 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552ddd5f_63c6_497a_a211_01b6ec437a5a.slice/crio-3acbc40ca12e7589669b56e11e65f6e25498e2c66beaec5a16cd030b56a875a0 WatchSource:0}: Error finding container 3acbc40ca12e7589669b56e11e65f6e25498e2c66beaec5a16cd030b56a875a0: Status 404 returned error can't find the container with id 3acbc40ca12e7589669b56e11e65f6e25498e2c66beaec5a16cd030b56a875a0 Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.188349 4957 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xn5gx\" (UniqueName: \"kubernetes.io/projected/66d17f19-66f9-41c9-9566-fca688da8506-kube-api-access-xn5gx\") pod \"nmstate-operator-5b5b58f5c8-fzfdw\" (UID: \"66d17f19-66f9-41c9-9566-fca688da8506\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.215954 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn5gx\" (UniqueName: \"kubernetes.io/projected/66d17f19-66f9-41c9-9566-fca688da8506-kube-api-access-xn5gx\") pod \"nmstate-operator-5b5b58f5c8-fzfdw\" (UID: \"66d17f19-66f9-41c9-9566-fca688da8506\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.364316 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.541700 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerStarted","Data":"3acbc40ca12e7589669b56e11e65f6e25498e2c66beaec5a16cd030b56a875a0"} Nov 28 21:04:38 crc kubenswrapper[4957]: I1128 21:04:38.841217 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw"] Nov 28 21:04:38 crc kubenswrapper[4957]: W1128 21:04:38.863460 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66d17f19_66f9_41c9_9566_fca688da8506.slice/crio-8a0f95e7894381335f84517a0415d4986bd76039cafdce2e1ad4cdadf95c5122 WatchSource:0}: Error finding container 8a0f95e7894381335f84517a0415d4986bd76039cafdce2e1ad4cdadf95c5122: Status 404 returned error can't find the container with id 8a0f95e7894381335f84517a0415d4986bd76039cafdce2e1ad4cdadf95c5122 Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.468716 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.469008 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.506045 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.551435 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" event={"ID":"66d17f19-66f9-41c9-9566-fca688da8506","Type":"ContainerStarted","Data":"8a0f95e7894381335f84517a0415d4986bd76039cafdce2e1ad4cdadf95c5122"} Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.553293 4957 generic.go:334] "Generic (PLEG): container finished" podID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerID="14d29061a22c020234a46ab4c7f4e7e292e2b6b693aa029bd7e5bf3c794fe107" exitCode=0 Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.554530 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerDied","Data":"14d29061a22c020234a46ab4c7f4e7e292e2b6b693aa029bd7e5bf3c794fe107"} Nov 28 21:04:39 crc kubenswrapper[4957]: I1128 21:04:39.602027 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:41 crc kubenswrapper[4957]: I1128 21:04:41.574944 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" event={"ID":"66d17f19-66f9-41c9-9566-fca688da8506","Type":"ContainerStarted","Data":"ee926a4847f90980c4fcf1a88e77baf5db098e5650f6105c5de81e5e7f5c0612"} Nov 28 21:04:41 crc kubenswrapper[4957]: I1128 21:04:41.577440 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerStarted","Data":"c1b135300f3f221c143376b097560ca350997ebad7659075abd3ec27063c431c"} Nov 28 21:04:41 crc kubenswrapper[4957]: I1128 21:04:41.591864 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fzfdw" podStartSLOduration=1.2842498039999999 podStartE2EDuration="3.591833363s" podCreationTimestamp="2025-11-28 21:04:38 +0000 UTC" firstStartedPulling="2025-11-28 21:04:38.866030945 +0000 UTC m=+918.334678864" lastFinishedPulling="2025-11-28 21:04:41.173614504 +0000 UTC m=+920.642262423" observedRunningTime="2025-11-28 21:04:41.591359421 +0000 UTC m=+921.060007350" watchObservedRunningTime="2025-11-28 21:04:41.591833363 +0000 UTC m=+921.060481302" Nov 28 21:04:42 crc kubenswrapper[4957]: I1128 21:04:42.588861 4957 generic.go:334] "Generic (PLEG): container finished" podID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerID="c1b135300f3f221c143376b097560ca350997ebad7659075abd3ec27063c431c" exitCode=0 Nov 28 21:04:42 crc kubenswrapper[4957]: I1128 21:04:42.588976 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerDied","Data":"c1b135300f3f221c143376b097560ca350997ebad7659075abd3ec27063c431c"} Nov 28 21:04:43 crc kubenswrapper[4957]: I1128 21:04:43.291587 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:43 crc kubenswrapper[4957]: I1128 21:04:43.291797 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tn9g5" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="registry-server" containerID="cri-o://55630af8daf3dbf926ae4cf61cbed9b4e2a298c6de8d18c14be61dd0e907562b" gracePeriod=2 Nov 28 21:04:43 crc kubenswrapper[4957]: I1128 21:04:43.611910 4957 generic.go:334] "Generic (PLEG): container finished" podID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerID="55630af8daf3dbf926ae4cf61cbed9b4e2a298c6de8d18c14be61dd0e907562b" exitCode=0 Nov 28 21:04:43 crc kubenswrapper[4957]: I1128 21:04:43.612091 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerDied","Data":"55630af8daf3dbf926ae4cf61cbed9b4e2a298c6de8d18c14be61dd0e907562b"} Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.174565 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.296829 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities\") pod \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.296949 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9lhr\" (UniqueName: \"kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr\") pod \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.296983 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content\") pod \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\" (UID: \"d2c429f5-fb2f-4d55-a312-e3a2e71ef475\") " Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.297816 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities" (OuterVolumeSpecName: "utilities") pod "d2c429f5-fb2f-4d55-a312-e3a2e71ef475" (UID: "d2c429f5-fb2f-4d55-a312-e3a2e71ef475"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.311543 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr" (OuterVolumeSpecName: "kube-api-access-z9lhr") pod "d2c429f5-fb2f-4d55-a312-e3a2e71ef475" (UID: "d2c429f5-fb2f-4d55-a312-e3a2e71ef475"). InnerVolumeSpecName "kube-api-access-z9lhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.395407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2c429f5-fb2f-4d55-a312-e3a2e71ef475" (UID: "d2c429f5-fb2f-4d55-a312-e3a2e71ef475"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.398290 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.398323 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9lhr\" (UniqueName: \"kubernetes.io/projected/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-kube-api-access-z9lhr\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.398336 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c429f5-fb2f-4d55-a312-e3a2e71ef475-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.621928 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9g5" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.621936 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9g5" event={"ID":"d2c429f5-fb2f-4d55-a312-e3a2e71ef475","Type":"ContainerDied","Data":"552b288222d1537886e0f62aa02598be0894fe73ef7488367002a50cad18808c"} Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.622076 4957 scope.go:117] "RemoveContainer" containerID="55630af8daf3dbf926ae4cf61cbed9b4e2a298c6de8d18c14be61dd0e907562b" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.625961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerStarted","Data":"a2bd14e6a2465b8c56221d64419eec07ad2bbae8bb8991941368ebda93cf8bfc"} Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.639879 4957 scope.go:117] "RemoveContainer" containerID="f31a9d0c924d429bd1393c9baff66d6731b994f7c6651df8c50fd490d5af0a3c" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.645700 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dm94h" podStartSLOduration=3.64844848 podStartE2EDuration="7.645684588s" podCreationTimestamp="2025-11-28 21:04:37 +0000 UTC" firstStartedPulling="2025-11-28 21:04:39.555726447 +0000 UTC m=+919.024374356" lastFinishedPulling="2025-11-28 21:04:43.552962555 +0000 UTC m=+923.021610464" observedRunningTime="2025-11-28 21:04:44.642839977 +0000 UTC m=+924.111487896" watchObservedRunningTime="2025-11-28 21:04:44.645684588 +0000 UTC m=+924.114332497" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.658564 4957 scope.go:117] "RemoveContainer" containerID="69f2f7ec26040a3c70d1a965eae418f3c94167a9c668460664a8a6cbe3175d24" Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.727512 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.727593 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tn9g5"] Nov 28 21:04:44 crc kubenswrapper[4957]: I1128 21:04:44.823525 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" path="/var/lib/kubelet/pods/d2c429f5-fb2f-4d55-a312-e3a2e71ef475/volumes" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.629026 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.629569 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.682697 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.884894 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8"] Nov 28 21:04:47 crc kubenswrapper[4957]: E1128 21:04:47.885156 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="extract-content" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.885171 4957 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="extract-content" Nov 28 21:04:47 crc kubenswrapper[4957]: E1128 21:04:47.885190 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="registry-server" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.885196 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="registry-server" Nov 28 21:04:47 crc kubenswrapper[4957]: E1128 21:04:47.885227 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="extract-utilities" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.885234 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="extract-utilities" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.885342 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c429f5-fb2f-4d55-a312-e3a2e71ef475" containerName="registry-server" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.886078 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.888566 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8gnm5" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.896923 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr"] Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.897890 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.900042 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.902034 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8"] Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.907181 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4fhg7"] Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.908802 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7g5s\" (UniqueName: \"kubernetes.io/projected/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-kube-api-access-c7g5s\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960350 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5mr\" (UniqueName: \"kubernetes.io/projected/b878fdac-b49b-40d8-b0cb-af5d44f21f9d-kube-api-access-jr5mr\") pod \"nmstate-metrics-7f946cbc9-8gmh8\" (UID: \"b878fdac-b49b-40d8-b0cb-af5d44f21f9d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-nmstate-lock\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960545 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-dbus-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960595 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf86d\" (UniqueName: \"kubernetes.io/projected/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-kube-api-access-lf86d\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960645 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-ovs-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:47 crc kubenswrapper[4957]: I1128 21:04:47.960811 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062388 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 
21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062763 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5mr\" (UniqueName: \"kubernetes.io/projected/b878fdac-b49b-40d8-b0cb-af5d44f21f9d-kube-api-access-jr5mr\") pod \"nmstate-metrics-7f946cbc9-8gmh8\" (UID: \"b878fdac-b49b-40d8-b0cb-af5d44f21f9d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-nmstate-lock\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: E1128 21:04:48.062604 4957 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062844 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-dbus-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: E1128 21:04:48.062888 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair podName:9234d52f-6818-4ccf-ac79-4d5e4f3cce21 nodeName:}" failed. No retries permitted until 2025-11-28 21:04:48.562864247 +0000 UTC m=+928.031512226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-t9nrr" (UID: "9234d52f-6818-4ccf-ac79-4d5e4f3cce21") : secret "openshift-nmstate-webhook" not found Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062921 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf86d\" (UniqueName: \"kubernetes.io/projected/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-kube-api-access-lf86d\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.062945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-nmstate-lock\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.063022 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-ovs-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.063079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-dbus-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc 
kubenswrapper[4957]: I1128 21:04:48.063109 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7g5s\" (UniqueName: \"kubernetes.io/projected/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-kube-api-access-c7g5s\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.063131 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-ovs-socket\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.073830 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.074843 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.080730 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.080837 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dpnk2" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.080968 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.093974 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.097096 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7g5s\" (UniqueName: \"kubernetes.io/projected/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-kube-api-access-c7g5s\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.113106 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr5mr\" (UniqueName: \"kubernetes.io/projected/b878fdac-b49b-40d8-b0cb-af5d44f21f9d-kube-api-access-jr5mr\") pod \"nmstate-metrics-7f946cbc9-8gmh8\" (UID: \"b878fdac-b49b-40d8-b0cb-af5d44f21f9d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.129854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf86d\" (UniqueName: \"kubernetes.io/projected/f9d6e935-f7a3-4a37-8d21-5bb73ef04186-kube-api-access-lf86d\") pod \"nmstate-handler-4fhg7\" (UID: \"f9d6e935-f7a3-4a37-8d21-5bb73ef04186\") " pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.140956 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.150932 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.151044 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.165560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.165628 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ftb\" (UniqueName: \"kubernetes.io/projected/c781e919-f546-4bd7-b564-cd424540268c-kube-api-access-56ftb\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.165798 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c781e919-f546-4bd7-b564-cd424540268c-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.203548 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.250642 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268063 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268133 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268166 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ftb\" (UniqueName: \"kubernetes.io/projected/c781e919-f546-4bd7-b564-cd424540268c-kube-api-access-56ftb\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268202 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268287 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lzf2\" (UniqueName: \"kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.268308 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c781e919-f546-4bd7-b564-cd424540268c-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.269514 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c781e919-f546-4bd7-b564-cd424540268c-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: E1128 21:04:48.269812 4957 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 28 21:04:48 crc kubenswrapper[4957]: E1128 21:04:48.269853 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert podName:c781e919-f546-4bd7-b564-cd424540268c nodeName:}" failed. No retries permitted until 2025-11-28 21:04:48.769841498 +0000 UTC m=+928.238489397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-8fhk9" (UID: "c781e919-f546-4bd7-b564-cd424540268c") : secret "plugin-serving-cert" not found Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.297947 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.298858 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.307187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ftb\" (UniqueName: \"kubernetes.io/projected/c781e919-f546-4bd7-b564-cd424540268c-kube-api-access-56ftb\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.321415 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376257 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376368 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376427 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376457 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376481 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376517 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lzf2\" (UniqueName: \"kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: 
I1128 21:04:48.376575 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b22b\" (UniqueName: \"kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376612 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.376638 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.377082 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.380092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.406002 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lzf2\" (UniqueName: \"kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2\") pod \"certified-operators-ndqzm\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b22b\" (UniqueName: \"kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc 
kubenswrapper[4957]: I1128 21:04:48.478263 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478342 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.478382 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.479357 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.479867 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.480089 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.480281 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.482533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.485265 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.503120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b22b\" (UniqueName: \"kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b\") pod \"console-ddc747d85-tg9cf\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.504912 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.579687 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.583381 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9234d52f-6818-4ccf-ac79-4d5e4f3cce21-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-t9nrr\" (UID: \"9234d52f-6818-4ccf-ac79-4d5e4f3cce21\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.655437 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4fhg7" event={"ID":"f9d6e935-f7a3-4a37-8d21-5bb73ef04186","Type":"ContainerStarted","Data":"bccce4cd13f6103f0d6ca580486adbf423cac4bc7a9f36242ddc06954e767163"} Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.677245 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.780181 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8"] Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.785089 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.798804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c781e919-f546-4bd7-b564-cd424540268c-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-8fhk9\" (UID: \"c781e919-f546-4bd7-b564-cd424540268c\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:48 crc kubenswrapper[4957]: I1128 21:04:48.827884 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.011595 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.077543 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.188265 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr"] Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.239708 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:04:49 crc kubenswrapper[4957]: W1128 21:04:49.281161 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde78648e_7956_4dc9_8905_28dc0700ff40.slice/crio-95537fd95b8df43437f7af7f455a7165cf440491757e5d944aa92cb62a411533 WatchSource:0}: Error finding container 95537fd95b8df43437f7af7f455a7165cf440491757e5d944aa92cb62a411533: Status 404 returned error can't find the container with id 95537fd95b8df43437f7af7f455a7165cf440491757e5d944aa92cb62a411533 Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.663495 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" event={"ID":"9234d52f-6818-4ccf-ac79-4d5e4f3cce21","Type":"ContainerStarted","Data":"4c90c48881ef2af1fee87bf23687ef640b7443c5a89158895996ad7993c118e6"} Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.667021 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ddc747d85-tg9cf" event={"ID":"de78648e-7956-4dc9-8905-28dc0700ff40","Type":"ContainerStarted","Data":"bc68ed8834f2de1a617e326c198e646610a1e1928ad1ccb42cfba54e75218653"} Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.667185 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ddc747d85-tg9cf" event={"ID":"de78648e-7956-4dc9-8905-28dc0700ff40","Type":"ContainerStarted","Data":"95537fd95b8df43437f7af7f455a7165cf440491757e5d944aa92cb62a411533"} Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.674935 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" event={"ID":"b878fdac-b49b-40d8-b0cb-af5d44f21f9d","Type":"ContainerStarted","Data":"633872fcce0f8a8064528855db4432ef232c38a6a601d30c6adf5ee1b5cb04a2"} Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.676755 4957 generic.go:334] "Generic (PLEG): container finished" podID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerID="d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5" exitCode=0 Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.676813 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerDied","Data":"d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5"} Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.676846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerStarted","Data":"7dde29a553c723a1984dd365e71b2c48417bef382afa9ee1055af809b83fb395"} Nov 28 21:04:49 crc 
kubenswrapper[4957]: I1128 21:04:49.696774 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-ddc747d85-tg9cf" podStartSLOduration=1.696753861 podStartE2EDuration="1.696753861s" podCreationTimestamp="2025-11-28 21:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:04:49.687404069 +0000 UTC m=+929.156051988" watchObservedRunningTime="2025-11-28 21:04:49.696753861 +0000 UTC m=+929.165401760" Nov 28 21:04:49 crc kubenswrapper[4957]: I1128 21:04:49.741780 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9"] Nov 28 21:04:49 crc kubenswrapper[4957]: W1128 21:04:49.745291 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc781e919_f546_4bd7_b564_cd424540268c.slice/crio-812c9889ed6f3aef04b44a58a4882b30a43c5c37fe4147a811bcc510f45ec7e3 WatchSource:0}: Error finding container 812c9889ed6f3aef04b44a58a4882b30a43c5c37fe4147a811bcc510f45ec7e3: Status 404 returned error can't find the container with id 812c9889ed6f3aef04b44a58a4882b30a43c5c37fe4147a811bcc510f45ec7e3 Nov 28 21:04:50 crc kubenswrapper[4957]: I1128 21:04:50.694044 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" event={"ID":"c781e919-f546-4bd7-b564-cd424540268c","Type":"ContainerStarted","Data":"812c9889ed6f3aef04b44a58a4882b30a43c5c37fe4147a811bcc510f45ec7e3"} Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.701929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" event={"ID":"b878fdac-b49b-40d8-b0cb-af5d44f21f9d","Type":"ContainerStarted","Data":"217ac1dbf92628aac5f3fce13c6f5ba5bb0034418ba0a6c633325166bb46c518"} Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.704223 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerStarted","Data":"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024"} Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.706985 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" event={"ID":"9234d52f-6818-4ccf-ac79-4d5e4f3cce21","Type":"ContainerStarted","Data":"ec6e9adec3d8aeb281a48fa2d9083f71e9aba27c94027cd14c49c7c40a4030bb"} Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.707264 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.709291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4fhg7" event={"ID":"f9d6e935-f7a3-4a37-8d21-5bb73ef04186","Type":"ContainerStarted","Data":"bc1fa7eda7234f90364806b30bca1450b17c77a8165f57de205678aee31948c0"} Nov 28 21:04:51 crc kubenswrapper[4957]: I1128 21:04:51.747400 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" podStartSLOduration=2.804068309 podStartE2EDuration="4.747384518s" podCreationTimestamp="2025-11-28 21:04:47 +0000 UTC" firstStartedPulling="2025-11-28 21:04:49.228362058 +0000 UTC m=+928.697009977" lastFinishedPulling="2025-11-28 21:04:51.171678267 +0000 UTC 
m=+930.640326186" observedRunningTime="2025-11-28 21:04:51.743522862 +0000 UTC m=+931.212170781" watchObservedRunningTime="2025-11-28 21:04:51.747384518 +0000 UTC m=+931.216032427" Nov 28 21:04:52 crc kubenswrapper[4957]: I1128 21:04:52.718150 4957 generic.go:334] "Generic (PLEG): container finished" podID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerID="f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024" exitCode=0 Nov 28 21:04:52 crc kubenswrapper[4957]: I1128 21:04:52.721155 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerDied","Data":"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024"} Nov 28 21:04:52 crc kubenswrapper[4957]: I1128 21:04:52.721932 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:52 crc kubenswrapper[4957]: I1128 21:04:52.737785 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4fhg7" podStartSLOduration=2.896687559 podStartE2EDuration="5.737612234s" podCreationTimestamp="2025-11-28 21:04:47 +0000 UTC" firstStartedPulling="2025-11-28 21:04:48.328418724 +0000 UTC m=+927.797066633" lastFinishedPulling="2025-11-28 21:04:51.169343379 +0000 UTC m=+930.637991308" observedRunningTime="2025-11-28 21:04:52.733289417 +0000 UTC m=+932.201937326" watchObservedRunningTime="2025-11-28 21:04:52.737612234 +0000 UTC m=+932.206260143" Nov 28 21:04:53 crc kubenswrapper[4957]: I1128 21:04:53.729985 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" event={"ID":"c781e919-f546-4bd7-b564-cd424540268c","Type":"ContainerStarted","Data":"2c18cbfd448b73be43671a3da817d592b47720f64cc00ac79cba228c5a467f0c"} Nov 28 21:04:53 crc kubenswrapper[4957]: I1128 21:04:53.750089 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-8fhk9" podStartSLOduration=2.362662822 podStartE2EDuration="5.750072632s" podCreationTimestamp="2025-11-28 21:04:48 +0000 UTC" firstStartedPulling="2025-11-28 21:04:49.748708572 +0000 UTC m=+929.217356481" lastFinishedPulling="2025-11-28 21:04:53.136118382 +0000 UTC m=+932.604766291" observedRunningTime="2025-11-28 21:04:53.744550395 +0000 UTC m=+933.213198304" watchObservedRunningTime="2025-11-28 21:04:53.750072632 +0000 UTC m=+933.218720541" Nov 28 21:04:54 crc kubenswrapper[4957]: I1128 21:04:54.739467 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" event={"ID":"b878fdac-b49b-40d8-b0cb-af5d44f21f9d","Type":"ContainerStarted","Data":"68dd301e64560521aa4f746f6edc4307170ff47d99392afcb0641adf395b1344"} Nov 28 21:04:54 crc kubenswrapper[4957]: I1128 21:04:54.741925 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerStarted","Data":"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd"} Nov 28 21:04:54 crc kubenswrapper[4957]: I1128 21:04:54.760269 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-8gmh8" podStartSLOduration=2.261310296 podStartE2EDuration="7.760253074s" podCreationTimestamp="2025-11-28 21:04:47 +0000 UTC" firstStartedPulling="2025-11-28 21:04:48.817049631 +0000 
UTC m=+928.285697530" lastFinishedPulling="2025-11-28 21:04:54.315992399 +0000 UTC m=+933.784640308" observedRunningTime="2025-11-28 21:04:54.757284481 +0000 UTC m=+934.225932390" watchObservedRunningTime="2025-11-28 21:04:54.760253074 +0000 UTC m=+934.228900983" Nov 28 21:04:54 crc kubenswrapper[4957]: I1128 21:04:54.785744 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ndqzm" podStartSLOduration=2.929904801 podStartE2EDuration="6.785725317s" podCreationTimestamp="2025-11-28 21:04:48 +0000 UTC" firstStartedPulling="2025-11-28 21:04:49.678725933 +0000 UTC m=+929.147373842" lastFinishedPulling="2025-11-28 21:04:53.534546449 +0000 UTC m=+933.003194358" observedRunningTime="2025-11-28 21:04:54.779721488 +0000 UTC m=+934.248369407" watchObservedRunningTime="2025-11-28 21:04:54.785725317 +0000 UTC m=+934.254373226" Nov 28 21:04:57 crc kubenswrapper[4957]: I1128 21:04:57.689035 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:57 crc kubenswrapper[4957]: I1128 21:04:57.771553 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:04:57 crc kubenswrapper[4957]: I1128 21:04:57.771803 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dm94h" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="registry-server" containerID="cri-o://a2bd14e6a2465b8c56221d64419eec07ad2bbae8bb8991941368ebda93cf8bfc" gracePeriod=2 Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.280671 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4fhg7" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.506268 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.506357 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.576933 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.678190 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.682420 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.693946 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.780737 4957 generic.go:334] "Generic (PLEG): container finished" podID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerID="a2bd14e6a2465b8c56221d64419eec07ad2bbae8bb8991941368ebda93cf8bfc" exitCode=0 Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.780879 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerDied","Data":"a2bd14e6a2465b8c56221d64419eec07ad2bbae8bb8991941368ebda93cf8bfc"} Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 
21:04:58.786001 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.850849 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.859307 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 21:04:58 crc kubenswrapper[4957]: I1128 21:04:58.972684 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.068779 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities\") pod \"552ddd5f-63c6-497a-a211-01b6ec437a5a\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.068854 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content\") pod \"552ddd5f-63c6-497a-a211-01b6ec437a5a\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.068901 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tq7k\" (UniqueName: \"kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k\") pod \"552ddd5f-63c6-497a-a211-01b6ec437a5a\" (UID: \"552ddd5f-63c6-497a-a211-01b6ec437a5a\") " Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.071474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities" (OuterVolumeSpecName: "utilities") pod "552ddd5f-63c6-497a-a211-01b6ec437a5a" (UID: "552ddd5f-63c6-497a-a211-01b6ec437a5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.078409 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k" (OuterVolumeSpecName: "kube-api-access-2tq7k") pod "552ddd5f-63c6-497a-a211-01b6ec437a5a" (UID: "552ddd5f-63c6-497a-a211-01b6ec437a5a"). InnerVolumeSpecName "kube-api-access-2tq7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.102478 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "552ddd5f-63c6-497a-a211-01b6ec437a5a" (UID: "552ddd5f-63c6-497a-a211-01b6ec437a5a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.172368 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.172409 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552ddd5f-63c6-497a-a211-01b6ec437a5a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.172421 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tq7k\" (UniqueName: \"kubernetes.io/projected/552ddd5f-63c6-497a-a211-01b6ec437a5a-kube-api-access-2tq7k\") on node \"crc\" DevicePath \"\"" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.538814 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.788335 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm94h" event={"ID":"552ddd5f-63c6-497a-a211-01b6ec437a5a","Type":"ContainerDied","Data":"3acbc40ca12e7589669b56e11e65f6e25498e2c66beaec5a16cd030b56a875a0"} Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.788413 4957 scope.go:117] "RemoveContainer" containerID="a2bd14e6a2465b8c56221d64419eec07ad2bbae8bb8991941368ebda93cf8bfc" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.788365 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm94h" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.805125 4957 scope.go:117] "RemoveContainer" containerID="c1b135300f3f221c143376b097560ca350997ebad7659075abd3ec27063c431c" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.834035 4957 scope.go:117] "RemoveContainer" containerID="14d29061a22c020234a46ab4c7f4e7e292e2b6b693aa029bd7e5bf3c794fe107" Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.836933 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:04:59 crc kubenswrapper[4957]: I1128 21:04:59.843636 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm94h"] Nov 28 21:05:00 crc kubenswrapper[4957]: I1128 21:05:00.796826 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ndqzm" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="registry-server" containerID="cri-o://94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd" gracePeriod=2 Nov 28 21:05:00 crc kubenswrapper[4957]: I1128 21:05:00.825840 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" path="/var/lib/kubelet/pods/552ddd5f-63c6-497a-a211-01b6ec437a5a/volumes" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.788476 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.807515 4957 generic.go:334] "Generic (PLEG): container finished" podID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerID="94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd" exitCode=0 Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.807597 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerDied","Data":"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd"} Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.807629 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndqzm" event={"ID":"190452fe-9708-4ff1-9aeb-c68069e838d3","Type":"ContainerDied","Data":"7dde29a553c723a1984dd365e71b2c48417bef382afa9ee1055af809b83fb395"} Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.807661 4957 scope.go:117] "RemoveContainer" containerID="94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.807831 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndqzm" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.809782 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content\") pod \"190452fe-9708-4ff1-9aeb-c68069e838d3\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.809904 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lzf2\" (UniqueName: \"kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2\") pod \"190452fe-9708-4ff1-9aeb-c68069e838d3\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.809971 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities\") pod \"190452fe-9708-4ff1-9aeb-c68069e838d3\" (UID: \"190452fe-9708-4ff1-9aeb-c68069e838d3\") " Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.811812 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities" (OuterVolumeSpecName: "utilities") pod "190452fe-9708-4ff1-9aeb-c68069e838d3" (UID: "190452fe-9708-4ff1-9aeb-c68069e838d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.816219 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2" (OuterVolumeSpecName: "kube-api-access-6lzf2") pod "190452fe-9708-4ff1-9aeb-c68069e838d3" (UID: "190452fe-9708-4ff1-9aeb-c68069e838d3"). InnerVolumeSpecName "kube-api-access-6lzf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.817090 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lzf2\" (UniqueName: \"kubernetes.io/projected/190452fe-9708-4ff1-9aeb-c68069e838d3-kube-api-access-6lzf2\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.817139 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.856349 4957 scope.go:117] "RemoveContainer" containerID="f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.869295 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "190452fe-9708-4ff1-9aeb-c68069e838d3" (UID: "190452fe-9708-4ff1-9aeb-c68069e838d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.875186 4957 scope.go:117] "RemoveContainer" containerID="d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.912286 4957 scope.go:117] "RemoveContainer" containerID="94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd" Nov 28 21:05:01 crc kubenswrapper[4957]: E1128 21:05:01.912657 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd\": container with ID starting with 94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd not found: ID does not exist" containerID="94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.912701 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd"} err="failed to get container status \"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd\": rpc error: code = NotFound desc = could not find container \"94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd\": container with ID starting with 94628fe303cdc29cd40611b4598ce75201c43bb2a5c9bcf9fd97dc7144092dcd not found: ID does not exist" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.912721 4957 scope.go:117] "RemoveContainer" containerID="f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024" Nov 28 21:05:01 crc kubenswrapper[4957]: E1128 21:05:01.913090 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024\": container with ID starting with f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024 not found: ID does not exist" containerID="f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.913134 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024"} err="failed to get container status 
\"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024\": rpc error: code = NotFound desc = could not find container \"f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024\": container with ID starting with f0e1ac1897b217a807bdcb85c750b395045c1d35fcd4b11c988f1e7b49a6a024 not found: ID does not exist" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.913166 4957 scope.go:117] "RemoveContainer" containerID="d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5" Nov 28 21:05:01 crc kubenswrapper[4957]: E1128 21:05:01.913746 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5\": container with ID starting with d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5 not found: ID does not exist" containerID="d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.913778 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5"} err="failed to get container status \"d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5\": rpc error: code = NotFound desc = could not find container \"d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5\": container with ID starting with d461506f1906069dfbadb142210d5ea5aec11bd6fdd4716be7084275c301bea5 not found: ID does not exist" Nov 28 21:05:01 crc kubenswrapper[4957]: I1128 21:05:01.919253 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/190452fe-9708-4ff1-9aeb-c68069e838d3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:02 crc kubenswrapper[4957]: I1128 21:05:02.137723 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:05:02 crc kubenswrapper[4957]: I1128 21:05:02.143246 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ndqzm"] Nov 28 21:05:02 crc kubenswrapper[4957]: I1128 21:05:02.822445 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" path="/var/lib/kubelet/pods/190452fe-9708-4ff1-9aeb-c68069e838d3/volumes" Nov 28 21:05:08 crc kubenswrapper[4957]: I1128 21:05:08.834238 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-t9nrr" Nov 28 21:05:23 crc kubenswrapper[4957]: I1128 21:05:23.909730 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-678cb99789-gjf6b" podUID="2009e762-b278-439e-9868-b694415b4b9f" containerName="console" containerID="cri-o://30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d" gracePeriod=15 Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.381055 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-678cb99789-gjf6b_2009e762-b278-439e-9868-b694415b4b9f/console/0.log" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.381118 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493240 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493319 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493355 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493406 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493435 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fbck\" (UniqueName: \"kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493463 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.493521 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca\") pod \"2009e762-b278-439e-9868-b694415b4b9f\" (UID: \"2009e762-b278-439e-9868-b694415b4b9f\") " Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.494068 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config" (OuterVolumeSpecName: "console-config") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.494193 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca" (OuterVolumeSpecName: "service-ca") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.494549 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.494627 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.499176 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.499376 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck" (OuterVolumeSpecName: "kube-api-access-8fbck") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "kube-api-access-8fbck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.514437 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2009e762-b278-439e-9868-b694415b4b9f" (UID: "2009e762-b278-439e-9868-b694415b4b9f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595757 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595801 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595811 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2009e762-b278-439e-9868-b694415b4b9f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595821 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fbck\" (UniqueName: \"kubernetes.io/projected/2009e762-b278-439e-9868-b694415b4b9f-kube-api-access-8fbck\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595831 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595839 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.595847 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2009e762-b278-439e-9868-b694415b4b9f-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.999517 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-678cb99789-gjf6b_2009e762-b278-439e-9868-b694415b4b9f/console/0.log" Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.999886 4957 generic.go:334] "Generic (PLEG): container finished" podID="2009e762-b278-439e-9868-b694415b4b9f" containerID="30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d" exitCode=2 Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.999920 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678cb99789-gjf6b" event={"ID":"2009e762-b278-439e-9868-b694415b4b9f","Type":"ContainerDied","Data":"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d"} Nov 28 21:05:24 crc kubenswrapper[4957]: I1128 21:05:24.999949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678cb99789-gjf6b" event={"ID":"2009e762-b278-439e-9868-b694415b4b9f","Type":"ContainerDied","Data":"313b511a8764a709e7fa3de647628f471e51829a18722b487a620afeea15da6d"} Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:24.999967 4957 scope.go:117] "RemoveContainer" containerID="30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.000125 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-678cb99789-gjf6b" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.019837 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.022437 4957 scope.go:117] "RemoveContainer" containerID="30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.022950 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d\": container with ID starting with 30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d not found: ID does not exist" containerID="30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.023006 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d"} err="failed to get container status \"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d\": rpc error: code = NotFound desc = could not find container \"30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d\": container with ID starting with 30f4fc8770775bdc79ed44ae9697244e463d4c78422c51f8b4758405c82e7d2d not found: ID does not exist" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.026966 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-678cb99789-gjf6b"] Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570083 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp"] Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570359 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="extract-utilities" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570378 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="extract-utilities" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570391 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="extract-content" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570398 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="extract-content" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570405 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2009e762-b278-439e-9868-b694415b4b9f" containerName="console" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570411 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2009e762-b278-439e-9868-b694415b4b9f" containerName="console" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570416 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="extract-content" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570423 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="extract-content" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570444 4957 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570450 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570465 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="extract-utilities" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570471 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="extract-utilities" Nov 28 21:05:25 crc kubenswrapper[4957]: E1128 21:05:25.570480 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570485 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570599 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2009e762-b278-439e-9868-b694415b4b9f" containerName="console" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570614 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="552ddd5f-63c6-497a-a211-01b6ec437a5a" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.570624 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="190452fe-9708-4ff1-9aeb-c68069e838d3" containerName="registry-server" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.571637 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.574398 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.581800 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp"] Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.713557 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qpw\" (UniqueName: \"kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.713849 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.714075 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.815040 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qpw\" (UniqueName: \"kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.815249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.815387 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.815854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.816061 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.838151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qpw\" (UniqueName: \"kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:25 crc kubenswrapper[4957]: I1128 21:05:25.889940 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:26 crc kubenswrapper[4957]: I1128 21:05:26.309850 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp"] Nov 28 21:05:26 crc kubenswrapper[4957]: I1128 21:05:26.823068 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2009e762-b278-439e-9868-b694415b4b9f" path="/var/lib/kubelet/pods/2009e762-b278-439e-9868-b694415b4b9f/volumes" Nov 28 21:05:27 crc kubenswrapper[4957]: I1128 21:05:27.023621 4957 generic.go:334] "Generic (PLEG): container finished" podID="7681936e-c73f-4b09-b146-53988a18a40b" containerID="ba8f3bccd69b274e3367d9254d6dbfeddaae8c9b428a7b237a7dcc5e791a86c0" exitCode=0 Nov 28 21:05:27 crc kubenswrapper[4957]: I1128 21:05:27.023661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" event={"ID":"7681936e-c73f-4b09-b146-53988a18a40b","Type":"ContainerDied","Data":"ba8f3bccd69b274e3367d9254d6dbfeddaae8c9b428a7b237a7dcc5e791a86c0"} Nov 28 21:05:27 crc kubenswrapper[4957]: I1128 21:05:27.023684 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" event={"ID":"7681936e-c73f-4b09-b146-53988a18a40b","Type":"ContainerStarted","Data":"95953f5b9688c60fa16904aedbca60c04c3396732f0987963afcab9cf184610c"} Nov 28 21:05:27 crc kubenswrapper[4957]: I1128 21:05:27.026066 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:05:29 crc kubenswrapper[4957]: I1128 21:05:29.040199 4957 generic.go:334] "Generic (PLEG): container finished" podID="7681936e-c73f-4b09-b146-53988a18a40b" containerID="8cdf490e69ca38566e6cd5f0e5bfd45e9a80cc292527cfe04ff0946f87591a90" exitCode=0 Nov 28 21:05:29 crc kubenswrapper[4957]: I1128 21:05:29.040340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" 
event={"ID":"7681936e-c73f-4b09-b146-53988a18a40b","Type":"ContainerDied","Data":"8cdf490e69ca38566e6cd5f0e5bfd45e9a80cc292527cfe04ff0946f87591a90"} Nov 28 21:05:30 crc kubenswrapper[4957]: I1128 21:05:30.048399 4957 generic.go:334] "Generic (PLEG): container finished" podID="7681936e-c73f-4b09-b146-53988a18a40b" containerID="1598151e1704bbcdb819db907eb000060e85d0652014a2c27c8c9525f1471b96" exitCode=0 Nov 28 21:05:30 crc kubenswrapper[4957]: I1128 21:05:30.048440 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" event={"ID":"7681936e-c73f-4b09-b146-53988a18a40b","Type":"ContainerDied","Data":"1598151e1704bbcdb819db907eb000060e85d0652014a2c27c8c9525f1471b96"} Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.375652 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.507324 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util\") pod \"7681936e-c73f-4b09-b146-53988a18a40b\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.507458 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4qpw\" (UniqueName: \"kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw\") pod \"7681936e-c73f-4b09-b146-53988a18a40b\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.507572 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle\") pod \"7681936e-c73f-4b09-b146-53988a18a40b\" (UID: \"7681936e-c73f-4b09-b146-53988a18a40b\") " Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.508472 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle" (OuterVolumeSpecName: "bundle") pod "7681936e-c73f-4b09-b146-53988a18a40b" (UID: "7681936e-c73f-4b09-b146-53988a18a40b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.516608 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw" (OuterVolumeSpecName: "kube-api-access-s4qpw") pod "7681936e-c73f-4b09-b146-53988a18a40b" (UID: "7681936e-c73f-4b09-b146-53988a18a40b"). InnerVolumeSpecName "kube-api-access-s4qpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.528914 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util" (OuterVolumeSpecName: "util") pod "7681936e-c73f-4b09-b146-53988a18a40b" (UID: "7681936e-c73f-4b09-b146-53988a18a40b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.608910 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.608949 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7681936e-c73f-4b09-b146-53988a18a40b-util\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:31 crc kubenswrapper[4957]: I1128 21:05:31.608969 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4qpw\" (UniqueName: \"kubernetes.io/projected/7681936e-c73f-4b09-b146-53988a18a40b-kube-api-access-s4qpw\") on node \"crc\" DevicePath \"\"" Nov 28 21:05:32 crc kubenswrapper[4957]: I1128 21:05:32.062516 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" event={"ID":"7681936e-c73f-4b09-b146-53988a18a40b","Type":"ContainerDied","Data":"95953f5b9688c60fa16904aedbca60c04c3396732f0987963afcab9cf184610c"} Nov 28 21:05:32 crc kubenswrapper[4957]: I1128 21:05:32.062552 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95953f5b9688c60fa16904aedbca60c04c3396732f0987963afcab9cf184610c" Nov 28 21:05:32 crc kubenswrapper[4957]: I1128 21:05:32.062610 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.468990 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"] Nov 28 21:05:40 crc kubenswrapper[4957]: E1128 21:05:40.469673 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="pull" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.469685 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="pull" Nov 28 21:05:40 crc kubenswrapper[4957]: E1128 21:05:40.469702 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="util" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.469708 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="util" Nov 28 21:05:40 crc kubenswrapper[4957]: E1128 21:05:40.469718 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="extract" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.469724 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="extract" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.469848 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7681936e-c73f-4b09-b146-53988a18a40b" containerName="extract" Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.470435 4957 util.go:30] "No sandbox for pod can be found. 
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.472469 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.473312 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bpkkb"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.473427 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.473559 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.474132 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.493413 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"]
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.558377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vk2\" (UniqueName: \"kubernetes.io/projected/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-kube-api-access-c7vk2\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.558633 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-webhook-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.558714 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-apiservice-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.660536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vk2\" (UniqueName: \"kubernetes.io/projected/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-kube-api-access-c7vk2\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.660592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-webhook-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.660616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-apiservice-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.667091 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-apiservice-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.668782 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-webhook-cert\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.681497 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vk2\" (UniqueName: \"kubernetes.io/projected/aa98f27d-5bda-41a4-bd59-1dff81ae7a65-kube-api-access-c7vk2\") pod \"metallb-operator-controller-manager-7cdb7495d5-qqgdt\" (UID: \"aa98f27d-5bda-41a4-bd59-1dff81ae7a65\") " pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.789249 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.795529 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"]
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.796449 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.801731 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-p96m4"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.805474 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.805484 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.812065 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"]
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.866238 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-webhook-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.866609 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wq2q\" (UniqueName: \"kubernetes.io/projected/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-kube-api-access-2wq2q\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.866890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.967865 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.968015 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-webhook-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:40 crc kubenswrapper[4957]: I1128 21:05:40.968053 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wq2q\" (UniqueName: \"kubernetes.io/projected/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-kube-api-access-2wq2q\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:40.998952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-webhook-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:41.001416 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-apiservice-cert\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:41.006915 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wq2q\" (UniqueName: \"kubernetes.io/projected/b0fbca5a-3b56-4822-9a82-5ec342b6b89a-kube-api-access-2wq2q\") pod \"metallb-operator-webhook-server-66c879d448-sm6v6\" (UID: \"b0fbca5a-3b56-4822-9a82-5ec342b6b89a\") " pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:41.211923 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:41.382722 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"]
Nov 28 21:05:41 crc kubenswrapper[4957]: I1128 21:05:41.781380 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"]
Nov 28 21:05:41 crc kubenswrapper[4957]: W1128 21:05:41.793093 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0fbca5a_3b56_4822_9a82_5ec342b6b89a.slice/crio-f6b3454cca3570dea383847c85e054fa82957334ab57480e31e008304e97bc2f WatchSource:0}: Error finding container f6b3454cca3570dea383847c85e054fa82957334ab57480e31e008304e97bc2f: Status 404 returned error can't find the container with id f6b3454cca3570dea383847c85e054fa82957334ab57480e31e008304e97bc2f
Nov 28 21:05:42 crc kubenswrapper[4957]: I1128 21:05:42.140739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt" event={"ID":"aa98f27d-5bda-41a4-bd59-1dff81ae7a65","Type":"ContainerStarted","Data":"45b7d4f82d03a4bb8c0f2ffa69e2fa0ac04aa4374740c4011a49a7ceaa887418"}
Nov 28 21:05:42 crc kubenswrapper[4957]: I1128 21:05:42.142076 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6" event={"ID":"b0fbca5a-3b56-4822-9a82-5ec342b6b89a","Type":"ContainerStarted","Data":"f6b3454cca3570dea383847c85e054fa82957334ab57480e31e008304e97bc2f"}
Nov 28 21:05:45 crc kubenswrapper[4957]: I1128 21:05:45.163736 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt" event={"ID":"aa98f27d-5bda-41a4-bd59-1dff81ae7a65","Type":"ContainerStarted","Data":"04e2e50e1ca9d8e4f492192a48aa916134b80984c08dede581156537c91a2c46"}
Nov 28 21:05:45 crc kubenswrapper[4957]: I1128 21:05:45.164556 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:05:45 crc kubenswrapper[4957]: I1128 21:05:45.186496 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt" podStartSLOduration=2.101857967 podStartE2EDuration="5.186469137s" podCreationTimestamp="2025-11-28 21:05:40 +0000 UTC" firstStartedPulling="2025-11-28 21:05:41.395505846 +0000 UTC m=+980.864153755" lastFinishedPulling="2025-11-28 21:05:44.480117026 +0000 UTC m=+983.948764925" observedRunningTime="2025-11-28 21:05:45.181630397 +0000 UTC m=+984.650278306" watchObservedRunningTime="2025-11-28 21:05:45.186469137 +0000 UTC m=+984.655117046"
Nov 28 21:05:47 crc kubenswrapper[4957]: I1128 21:05:47.178503 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6" event={"ID":"b0fbca5a-3b56-4822-9a82-5ec342b6b89a","Type":"ContainerStarted","Data":"cec877630f00b9410cb7a5e6970e95f9a3e6f07bd4ffcb7cfef2517bd7c725f5"}
Nov 28 21:05:47 crc kubenswrapper[4957]: I1128 21:05:47.215317 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6" podStartSLOduration=2.785849213 podStartE2EDuration="7.215297678s" podCreationTimestamp="2025-11-28 21:05:40 +0000 UTC" firstStartedPulling="2025-11-28 21:05:41.796584529 +0000 UTC m=+981.265232438" lastFinishedPulling="2025-11-28 21:05:46.226032994 +0000 UTC m=+985.694680903" observedRunningTime="2025-11-28 21:05:47.209015253 +0000 UTC m=+986.677663162" watchObservedRunningTime="2025-11-28 21:05:47.215297678 +0000 UTC m=+986.683945587"
Nov 28 21:05:48 crc kubenswrapper[4957]: I1128 21:05:48.183961 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:06:01 crc kubenswrapper[4957]: I1128 21:06:01.217351 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-66c879d448-sm6v6"
Nov 28 21:06:09 crc kubenswrapper[4957]: I1128 21:06:09.025342 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:06:09 crc kubenswrapper[4957]: I1128 21:06:09.025798 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:06:20 crc kubenswrapper[4957]: I1128 21:06:20.792422 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7cdb7495d5-qqgdt"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.569494 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j9v6v"]
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.572744 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.574768 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-px7qh"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.575163 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.575453 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.584060 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"]
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.585149 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.597677 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"]
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.598292 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhx4f\" (UniqueName: \"kubernetes.io/projected/58bfcebd-2036-46ce-8b59-d47e2b138c2f-kube-api-access-xhx4f\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638241 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-sockets\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638407 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638483 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-reloader\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638511 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-conf\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638552 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctdfn\" (UniqueName: \"kubernetes.io/projected/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-kube-api-access-ctdfn\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638598 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-startup\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.638713 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58bfcebd-2036-46ce-8b59-d47e2b138c2f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.670795 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dn826"]
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.682649 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dn826"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.687552 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.687878 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.688191 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ttvdj"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.694278 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.723559 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-mtzhn"]
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.724860 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-mtzhn"
Need to start a new one" pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.729431 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.729549 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-mtzhn"] Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-reloader\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742051 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-conf\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742074 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctdfn\" (UniqueName: \"kubernetes.io/projected/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-kube-api-access-ctdfn\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-startup\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742144 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58bfcebd-2036-46ce-8b59-d47e2b138c2f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742169 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df95d986-54c6-4e37-87f7-6775e4c24d4f-metallb-excludel2\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhx4f\" (UniqueName: \"kubernetes.io/projected/58bfcebd-2036-46ce-8b59-d47e2b138c2f-kube-api-access-xhx4f\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742232 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742253 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-79gsp\" (UniqueName: \"kubernetes.io/projected/df95d986-54c6-4e37-87f7-6775e4c24d4f-kube-api-access-79gsp\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742275 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742292 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-sockets\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742324 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742342 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742707 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-reloader\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.742875 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-conf\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.743684 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-startup\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.747170 4957 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.747295 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs podName:94df2aa8-79ae-408f-af72-a1f5ee3a05f2 nodeName:}" failed. No retries permitted until 2025-11-28 21:06:22.247277445 +0000 UTC m=+1021.715925344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs") pod "frr-k8s-j9v6v" (UID: "94df2aa8-79ae-408f-af72-a1f5ee3a05f2") : secret "frr-k8s-certs-secret" not found Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.747690 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.747863 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-frr-sockets\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.765174 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58bfcebd-2036-46ce-8b59-d47e2b138c2f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.768628 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctdfn\" (UniqueName: \"kubernetes.io/projected/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-kube-api-access-ctdfn\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.778057 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhx4f\" (UniqueName: \"kubernetes.io/projected/58bfcebd-2036-46ce-8b59-d47e2b138c2f-kube-api-access-xhx4f\") pod \"frr-k8s-webhook-server-7fcb986d4-zqgbd\" (UID: \"58bfcebd-2036-46ce-8b59-d47e2b138c2f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.843890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-cert\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.843938 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-metrics-certs\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.843970 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df95d986-54c6-4e37-87f7-6775e4c24d4f-metallb-excludel2\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.844008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gsp\" (UniqueName: \"kubernetes.io/projected/df95d986-54c6-4e37-87f7-6775e4c24d4f-kube-api-access-79gsp\") pod 
\"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.844033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffp79\" (UniqueName: \"kubernetes.io/projected/505298d8-01d1-4918-8329-04c935f6a8a0-kube-api-access-ffp79\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.844052 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.844103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.844231 4957 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.844279 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs podName:df95d986-54c6-4e37-87f7-6775e4c24d4f nodeName:}" failed. No retries permitted until 2025-11-28 21:06:22.344264499 +0000 UTC m=+1021.812912398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs") pod "speaker-dn826" (UID: "df95d986-54c6-4e37-87f7-6775e4c24d4f") : secret "speaker-certs-secret" not found Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.844988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df95d986-54c6-4e37-87f7-6775e4c24d4f-metallb-excludel2\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826" Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.845204 4957 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 21:06:21 crc kubenswrapper[4957]: E1128 21:06:21.845249 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist podName:df95d986-54c6-4e37-87f7-6775e4c24d4f nodeName:}" failed. No retries permitted until 2025-11-28 21:06:22.345241653 +0000 UTC m=+1021.813889562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist") pod "speaker-dn826" (UID: "df95d986-54c6-4e37-87f7-6775e4c24d4f") : secret "metallb-memberlist" not found
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.863499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gsp\" (UniqueName: \"kubernetes.io/projected/df95d986-54c6-4e37-87f7-6775e4c24d4f-kube-api-access-79gsp\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.907732 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.945724 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffp79\" (UniqueName: \"kubernetes.io/projected/505298d8-01d1-4918-8329-04c935f6a8a0-kube-api-access-ffp79\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.945862 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-cert\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.945887 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-metrics-certs\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.949871 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-metrics-certs\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.952621 4957 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.961457 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/505298d8-01d1-4918-8329-04c935f6a8a0-cert\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:21 crc kubenswrapper[4957]: I1128 21:06:21.971870 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffp79\" (UniqueName: \"kubernetes.io/projected/505298d8-01d1-4918-8329-04c935f6a8a0-kube-api-access-ffp79\") pod \"controller-f8648f98b-mtzhn\" (UID: \"505298d8-01d1-4918-8329-04c935f6a8a0\") " pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.047998 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-mtzhn"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.250271 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.253546 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94df2aa8-79ae-408f-af72-a1f5ee3a05f2-metrics-certs\") pod \"frr-k8s-j9v6v\" (UID: \"94df2aa8-79ae-408f-af72-a1f5ee3a05f2\") " pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.310299 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd"]
Nov 28 21:06:22 crc kubenswrapper[4957]: W1128 21:06:22.317676 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58bfcebd_2036_46ce_8b59_d47e2b138c2f.slice/crio-ed956369db6100a529814b5fb664022569a00007cdb6c174bfddcc78c5072cee WatchSource:0}: Error finding container ed956369db6100a529814b5fb664022569a00007cdb6c174bfddcc78c5072cee: Status 404 returned error can't find the container with id ed956369db6100a529814b5fb664022569a00007cdb6c174bfddcc78c5072cee
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.351660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.351721 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
Nov 28 21:06:22 crc kubenswrapper[4957]: E1128 21:06:22.351826 4957 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 28 21:06:22 crc kubenswrapper[4957]: E1128 21:06:22.351883 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist podName:df95d986-54c6-4e37-87f7-6775e4c24d4f nodeName:}" failed. No retries permitted until 2025-11-28 21:06:23.351867472 +0000 UTC m=+1022.820515381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist") pod "speaker-dn826" (UID: "df95d986-54c6-4e37-87f7-6775e4c24d4f") : secret "metallb-memberlist" not found
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.356182 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-metrics-certs\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.419025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" event={"ID":"58bfcebd-2036-46ce-8b59-d47e2b138c2f","Type":"ContainerStarted","Data":"ed956369db6100a529814b5fb664022569a00007cdb6c174bfddcc78c5072cee"}
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.436824 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-mtzhn"]
Nov 28 21:06:22 crc kubenswrapper[4957]: W1128 21:06:22.442527 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505298d8_01d1_4918_8329_04c935f6a8a0.slice/crio-c016c4928acfa964105b4a384bf1396778af20b00a60a805ccc7a2a91089e8e6 WatchSource:0}: Error finding container c016c4928acfa964105b4a384bf1396778af20b00a60a805ccc7a2a91089e8e6: Status 404 returned error can't find the container with id c016c4928acfa964105b4a384bf1396778af20b00a60a805ccc7a2a91089e8e6
Nov 28 21:06:22 crc kubenswrapper[4957]: I1128 21:06:22.495052 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j9v6v"
Nov 28 21:06:23 crc kubenswrapper[4957]: I1128 21:06:23.367422 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
Nov 28 21:06:23 crc kubenswrapper[4957]: I1128 21:06:23.372892 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df95d986-54c6-4e37-87f7-6775e4c24d4f-memberlist\") pod \"speaker-dn826\" (UID: \"df95d986-54c6-4e37-87f7-6775e4c24d4f\") " pod="metallb-system/speaker-dn826"
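The arc above resolves itself: the "memberlist" volume for speaker-dn826 could not be mounted at 21:06:22 because the metallb-memberlist secret did not exist yet, the kubelet backed off for 1s (durationBeforeRetry), and the retry at 21:06:23 succeeded once the secret had appeared (presumably created by the MetalLB controller that had just started). For reference, a minimal client-go sketch that polls for the secret the way one would when debugging this by hand; the namespace and secret name come from the log, the kubeconfig location and 1s poll interval are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a kubeconfig at the default location (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll once per second, mirroring the kubelet's 1s durationBeforeRetry.
    	for {
    		_, err := cs.CoreV1().Secrets("metallb-system").
    			Get(context.TODO(), "metallb-memberlist", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("secret metallb-memberlist exists; the mount can proceed")
    			return
    		}
    		fmt.Println("still waiting:", err)
    		time.Sleep(1 * time.Second)
    	}
    }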
(probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:23 crc kubenswrapper[4957]: I1128 21:06:23.431171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"8a672e1be4836aea47ea865b5e6e5969805493d12673008ff6211259eaedfb35"} Nov 28 21:06:23 crc kubenswrapper[4957]: I1128 21:06:23.450463 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-mtzhn" podStartSLOduration=2.450440255 podStartE2EDuration="2.450440255s" podCreationTimestamp="2025-11-28 21:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:06:23.446022386 +0000 UTC m=+1022.914670305" watchObservedRunningTime="2025-11-28 21:06:23.450440255 +0000 UTC m=+1022.919088164" Nov 28 21:06:23 crc kubenswrapper[4957]: I1128 21:06:23.516817 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dn826" Nov 28 21:06:23 crc kubenswrapper[4957]: W1128 21:06:23.539569 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf95d986_54c6_4e37_87f7_6775e4c24d4f.slice/crio-9b21b5668e8c2e0149f1d4eb09c241a707d2c51f9c927c51c677775cf1f1d636 WatchSource:0}: Error finding container 9b21b5668e8c2e0149f1d4eb09c241a707d2c51f9c927c51c677775cf1f1d636: Status 404 returned error can't find the container with id 9b21b5668e8c2e0149f1d4eb09c241a707d2c51f9c927c51c677775cf1f1d636 Nov 28 21:06:24 crc kubenswrapper[4957]: I1128 21:06:24.441414 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dn826" event={"ID":"df95d986-54c6-4e37-87f7-6775e4c24d4f","Type":"ContainerStarted","Data":"d8df1137d3d5f6d586a76c965f389fa5d7f912e53d9dc6eb2d7d600b03e23eb2"} Nov 28 21:06:24 crc kubenswrapper[4957]: I1128 21:06:24.441795 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dn826" event={"ID":"df95d986-54c6-4e37-87f7-6775e4c24d4f","Type":"ContainerStarted","Data":"6688229386f93bb3c67ce4884d339942af99873ee041b63248717f4c84ff6482"} Nov 28 21:06:24 crc kubenswrapper[4957]: I1128 21:06:24.441814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dn826" event={"ID":"df95d986-54c6-4e37-87f7-6775e4c24d4f","Type":"ContainerStarted","Data":"9b21b5668e8c2e0149f1d4eb09c241a707d2c51f9c927c51c677775cf1f1d636"} Nov 28 21:06:24 crc kubenswrapper[4957]: I1128 21:06:24.442002 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dn826" Nov 28 21:06:24 crc kubenswrapper[4957]: I1128 21:06:24.457620 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dn826" podStartSLOduration=3.457601083 podStartE2EDuration="3.457601083s" podCreationTimestamp="2025-11-28 21:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:06:24.455833849 +0000 UTC m=+1023.924481758" watchObservedRunningTime="2025-11-28 21:06:24.457601083 +0000 UTC m=+1023.926248992" Nov 28 21:06:30 crc kubenswrapper[4957]: I1128 21:06:30.503961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" 
event={"ID":"58bfcebd-2036-46ce-8b59-d47e2b138c2f","Type":"ContainerStarted","Data":"b2a5b1c475d43daf3120f549137a9fb3af3aee435e0ea4bc2379a50b28cd1cc8"} Nov 28 21:06:30 crc kubenswrapper[4957]: I1128 21:06:30.504572 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:30 crc kubenswrapper[4957]: I1128 21:06:30.506454 4957 generic.go:334] "Generic (PLEG): container finished" podID="94df2aa8-79ae-408f-af72-a1f5ee3a05f2" containerID="d73eea51dfc8ab6c7606a2bbf077e1b9a97803eda7d523b24f60aeea6fff2c90" exitCode=0 Nov 28 21:06:30 crc kubenswrapper[4957]: I1128 21:06:30.506629 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerDied","Data":"d73eea51dfc8ab6c7606a2bbf077e1b9a97803eda7d523b24f60aeea6fff2c90"} Nov 28 21:06:30 crc kubenswrapper[4957]: I1128 21:06:30.523945 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" podStartSLOduration=1.874392903 podStartE2EDuration="9.523923412s" podCreationTimestamp="2025-11-28 21:06:21 +0000 UTC" firstStartedPulling="2025-11-28 21:06:22.319710928 +0000 UTC m=+1021.788358837" lastFinishedPulling="2025-11-28 21:06:29.969241437 +0000 UTC m=+1029.437889346" observedRunningTime="2025-11-28 21:06:30.518812276 +0000 UTC m=+1029.987460185" watchObservedRunningTime="2025-11-28 21:06:30.523923412 +0000 UTC m=+1029.992571321" Nov 28 21:06:31 crc kubenswrapper[4957]: I1128 21:06:31.513940 4957 generic.go:334] "Generic (PLEG): container finished" podID="94df2aa8-79ae-408f-af72-a1f5ee3a05f2" containerID="d48f5a6cdf842538579e9b88a7fd67152507c8aaa9e8a73cca524b0566e4b998" exitCode=0 Nov 28 21:06:31 crc kubenswrapper[4957]: I1128 21:06:31.514022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerDied","Data":"d48f5a6cdf842538579e9b88a7fd67152507c8aaa9e8a73cca524b0566e4b998"} Nov 28 21:06:32 crc kubenswrapper[4957]: I1128 21:06:32.054389 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-mtzhn" Nov 28 21:06:32 crc kubenswrapper[4957]: I1128 21:06:32.522218 4957 generic.go:334] "Generic (PLEG): container finished" podID="94df2aa8-79ae-408f-af72-a1f5ee3a05f2" containerID="c9c70376c7a18af00f9e695a1724aee8b64f16788abee186964be7c97f7b1328" exitCode=0 Nov 28 21:06:32 crc kubenswrapper[4957]: I1128 21:06:32.522399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerDied","Data":"c9c70376c7a18af00f9e695a1724aee8b64f16788abee186964be7c97f7b1328"} Nov 28 21:06:33 crc kubenswrapper[4957]: I1128 21:06:33.520384 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dn826" Nov 28 21:06:33 crc kubenswrapper[4957]: I1128 21:06:33.533308 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"8d376c49989ed0ff62a7ce0e0702f458bb2d07838b006e2d2dda7165a75ca32d"} Nov 28 21:06:33 crc kubenswrapper[4957]: I1128 21:06:33.533376 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" 
event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"fefd4dc394d63889fb2ac3bb00ccc13a9eb2bc719658a2f41fe557f2de80604d"} Nov 28 21:06:33 crc kubenswrapper[4957]: I1128 21:06:33.533423 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"b2d005f2b2379e758964506eeaa0ee0b21b6d69ddb99de91f66e2f6d3311b5b9"} Nov 28 21:06:33 crc kubenswrapper[4957]: I1128 21:06:33.533431 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"27a9744a66077381ba45fe2f150cbc8b75ff6f926bcb973d93c22ffcb1d44055"} Nov 28 21:06:34 crc kubenswrapper[4957]: I1128 21:06:34.543784 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"3bf6117a32a78991fe634691f06f6ad9bfde0b883db39d6923d9cc1698544d0d"} Nov 28 21:06:34 crc kubenswrapper[4957]: I1128 21:06:34.543832 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9v6v" event={"ID":"94df2aa8-79ae-408f-af72-a1f5ee3a05f2","Type":"ContainerStarted","Data":"c14778f371eda649e0d15d81fab3f5f5db90369b4f0584b164c0153b7bb22004"} Nov 28 21:06:34 crc kubenswrapper[4957]: I1128 21:06:34.543936 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:34 crc kubenswrapper[4957]: I1128 21:06:34.573864 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j9v6v" podStartSLOduration=6.168599608 podStartE2EDuration="13.573840336s" podCreationTimestamp="2025-11-28 21:06:21 +0000 UTC" firstStartedPulling="2025-11-28 21:06:22.583583522 +0000 UTC m=+1022.052231431" lastFinishedPulling="2025-11-28 21:06:29.98882424 +0000 UTC m=+1029.457472159" observedRunningTime="2025-11-28 21:06:34.572856492 +0000 UTC m=+1034.041504431" watchObservedRunningTime="2025-11-28 21:06:34.573840336 +0000 UTC m=+1034.042488245" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.674666 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gnq58"] Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.676268 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.678807 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.679162 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.684751 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-b5srt" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.697315 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gnq58"] Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.809856 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gf6w\" (UniqueName: \"kubernetes.io/projected/f76be402-2871-4e82-8c2a-8cc359b8c889-kube-api-access-4gf6w\") pod \"openstack-operator-index-gnq58\" (UID: \"f76be402-2871-4e82-8c2a-8cc359b8c889\") " pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.912499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gf6w\" (UniqueName: \"kubernetes.io/projected/f76be402-2871-4e82-8c2a-8cc359b8c889-kube-api-access-4gf6w\") pod \"openstack-operator-index-gnq58\" (UID: \"f76be402-2871-4e82-8c2a-8cc359b8c889\") " pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:36 crc kubenswrapper[4957]: I1128 21:06:36.936886 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gf6w\" (UniqueName: \"kubernetes.io/projected/f76be402-2871-4e82-8c2a-8cc359b8c889-kube-api-access-4gf6w\") pod \"openstack-operator-index-gnq58\" (UID: \"f76be402-2871-4e82-8c2a-8cc359b8c889\") " pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:37 crc kubenswrapper[4957]: I1128 21:06:37.006373 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:37 crc kubenswrapper[4957]: I1128 21:06:37.495424 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:37 crc kubenswrapper[4957]: I1128 21:06:37.527617 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gnq58"] Nov 28 21:06:37 crc kubenswrapper[4957]: I1128 21:06:37.536791 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:37 crc kubenswrapper[4957]: I1128 21:06:37.577482 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gnq58" event={"ID":"f76be402-2871-4e82-8c2a-8cc359b8c889","Type":"ContainerStarted","Data":"4a8902f87dd45fad6bfe70a1f3a7aef04ddb845e7cb63a090ebeb5ae24ce2b35"} Nov 28 21:06:38 crc kubenswrapper[4957]: I1128 21:06:38.992454 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:06:38 crc kubenswrapper[4957]: I1128 21:06:38.992750 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:06:40 crc kubenswrapper[4957]: I1128 21:06:40.602018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gnq58" event={"ID":"f76be402-2871-4e82-8c2a-8cc359b8c889","Type":"ContainerStarted","Data":"15e189176f9d95d0ae7d77e04edade30a0cc88448109683c98d6b81939e13220"} Nov 28 21:06:40 crc kubenswrapper[4957]: I1128 21:06:40.615731 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gnq58" podStartSLOduration=2.415036275 podStartE2EDuration="4.615708971s" podCreationTimestamp="2025-11-28 21:06:36 +0000 UTC" firstStartedPulling="2025-11-28 21:06:37.552970351 +0000 UTC m=+1037.021618260" lastFinishedPulling="2025-11-28 21:06:39.753643047 +0000 UTC m=+1039.222290956" observedRunningTime="2025-11-28 21:06:40.61485056 +0000 UTC m=+1040.083498479" watchObservedRunningTime="2025-11-28 21:06:40.615708971 +0000 UTC m=+1040.084356890" Nov 28 21:06:41 crc kubenswrapper[4957]: I1128 21:06:41.917691 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-zqgbd" Nov 28 21:06:42 crc kubenswrapper[4957]: I1128 21:06:42.498343 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j9v6v" Nov 28 21:06:47 crc kubenswrapper[4957]: I1128 21:06:47.007467 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:47 crc kubenswrapper[4957]: I1128 21:06:47.008516 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:47 crc kubenswrapper[4957]: I1128 21:06:47.044678 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:47 crc kubenswrapper[4957]: I1128 21:06:47.677055 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gnq58" Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.869922 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq"] Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.874547 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.879863 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq"] Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.907012 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fm78t" Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.915068 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.915140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:54 crc kubenswrapper[4957]: I1128 21:06:54.915261 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lz8c\" (UniqueName: \"kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.017247 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.017315 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.017384 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9lz8c\" (UniqueName: \"kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.017766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.017831 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.034975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lz8c\" (UniqueName: \"kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c\") pod \"468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.232171 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.700638 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq"] Nov 28 21:06:55 crc kubenswrapper[4957]: I1128 21:06:55.713018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" event={"ID":"ac62e91b-26c6-4dac-bba1-54f4e46ff61e","Type":"ContainerStarted","Data":"02be2fd5f3a4a26e1f47ad6e8fc5313c51835140ac752baa87913e6b02bcaedc"} Nov 28 21:06:56 crc kubenswrapper[4957]: I1128 21:06:56.720565 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerID="693e058a5cf2ae9cc942c507f1ab9d1a081f3c4cc0dda79bba4a5bb74478718b" exitCode=0 Nov 28 21:06:56 crc kubenswrapper[4957]: I1128 21:06:56.720608 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" event={"ID":"ac62e91b-26c6-4dac-bba1-54f4e46ff61e","Type":"ContainerDied","Data":"693e058a5cf2ae9cc942c507f1ab9d1a081f3c4cc0dda79bba4a5bb74478718b"} Nov 28 21:06:57 crc kubenswrapper[4957]: I1128 21:06:57.727810 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerID="c5f942c33685be7c8faaa2ee544bd90f361d81c282017f559007771ebe6809a7" exitCode=0 Nov 28 21:06:57 crc kubenswrapper[4957]: I1128 21:06:57.728188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" event={"ID":"ac62e91b-26c6-4dac-bba1-54f4e46ff61e","Type":"ContainerDied","Data":"c5f942c33685be7c8faaa2ee544bd90f361d81c282017f559007771ebe6809a7"} Nov 28 21:06:58 crc kubenswrapper[4957]: I1128 21:06:58.738864 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerID="51776fa2dfd8c0992ca3b4b51913d9efe488ba54fe41a04b85acafa70e63f93b" exitCode=0 Nov 28 21:06:58 crc kubenswrapper[4957]: I1128 21:06:58.738960 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" event={"ID":"ac62e91b-26c6-4dac-bba1-54f4e46ff61e","Type":"ContainerDied","Data":"51776fa2dfd8c0992ca3b4b51913d9efe488ba54fe41a04b85acafa70e63f93b"} Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.048833 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.098490 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle\") pod \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.098538 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lz8c\" (UniqueName: \"kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c\") pod \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.098625 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util\") pod \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\" (UID: \"ac62e91b-26c6-4dac-bba1-54f4e46ff61e\") " Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.099180 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle" (OuterVolumeSpecName: "bundle") pod "ac62e91b-26c6-4dac-bba1-54f4e46ff61e" (UID: "ac62e91b-26c6-4dac-bba1-54f4e46ff61e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.104834 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c" (OuterVolumeSpecName: "kube-api-access-9lz8c") pod "ac62e91b-26c6-4dac-bba1-54f4e46ff61e" (UID: "ac62e91b-26c6-4dac-bba1-54f4e46ff61e"). InnerVolumeSpecName "kube-api-access-9lz8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.119981 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util" (OuterVolumeSpecName: "util") pod "ac62e91b-26c6-4dac-bba1-54f4e46ff61e" (UID: "ac62e91b-26c6-4dac-bba1-54f4e46ff61e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.200344 4957 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.200390 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lz8c\" (UniqueName: \"kubernetes.io/projected/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-kube-api-access-9lz8c\") on node \"crc\" DevicePath \"\"" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.200401 4957 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac62e91b-26c6-4dac-bba1-54f4e46ff61e-util\") on node \"crc\" DevicePath \"\"" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.758869 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" event={"ID":"ac62e91b-26c6-4dac-bba1-54f4e46ff61e","Type":"ContainerDied","Data":"02be2fd5f3a4a26e1f47ad6e8fc5313c51835140ac752baa87913e6b02bcaedc"} Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.759298 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02be2fd5f3a4a26e1f47ad6e8fc5313c51835140ac752baa87913e6b02bcaedc" Nov 28 21:07:00 crc kubenswrapper[4957]: I1128 21:07:00.759057 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.043095 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn"] Nov 28 21:07:07 crc kubenswrapper[4957]: E1128 21:07:07.043773 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="extract" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.043786 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="extract" Nov 28 21:07:07 crc kubenswrapper[4957]: E1128 21:07:07.043821 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="pull" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.043828 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="pull" Nov 28 21:07:07 crc kubenswrapper[4957]: E1128 21:07:07.043837 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="util" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.043844 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="util" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.043989 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac62e91b-26c6-4dac-bba1-54f4e46ff61e" containerName="extract" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.044523 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.046418 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-hmqh7" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.085861 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn"] Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.139179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdr9\" (UniqueName: \"kubernetes.io/projected/917e81d1-a7a3-431f-9b6f-511334a57f50-kube-api-access-bzdr9\") pod \"openstack-operator-controller-operator-567f7c7dd7-9wckn\" (UID: \"917e81d1-a7a3-431f-9b6f-511334a57f50\") " pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.241336 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdr9\" (UniqueName: \"kubernetes.io/projected/917e81d1-a7a3-431f-9b6f-511334a57f50-kube-api-access-bzdr9\") pod \"openstack-operator-controller-operator-567f7c7dd7-9wckn\" (UID: \"917e81d1-a7a3-431f-9b6f-511334a57f50\") " pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.260533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdr9\" (UniqueName: \"kubernetes.io/projected/917e81d1-a7a3-431f-9b6f-511334a57f50-kube-api-access-bzdr9\") pod \"openstack-operator-controller-operator-567f7c7dd7-9wckn\" (UID: \"917e81d1-a7a3-431f-9b6f-511334a57f50\") " pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.362974 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" Nov 28 21:07:07 crc kubenswrapper[4957]: I1128 21:07:07.843914 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn"] Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.829089 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" event={"ID":"917e81d1-a7a3-431f-9b6f-511334a57f50","Type":"ContainerStarted","Data":"80b878fba4a12f20ec06204449016cdb1b31e10a070ce25372da1b8ad556fd29"} Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.992277 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.992628 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.992693 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.993492 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:07:08 crc kubenswrapper[4957]: I1128 21:07:08.993544 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798" gracePeriod=600 Nov 28 21:07:09 crc kubenswrapper[4957]: I1128 21:07:09.843384 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798" exitCode=0 Nov 28 21:07:09 crc kubenswrapper[4957]: I1128 21:07:09.843465 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798"} Nov 28 21:07:09 crc kubenswrapper[4957]: I1128 21:07:09.843833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4"} Nov 28 21:07:09 crc kubenswrapper[4957]: I1128 21:07:09.843858 4957 scope.go:117] "RemoveContainer" containerID="3bab2bbf40b4116f8715ef0abc775c76378c4f9b0a063bdb948a52f066fba5bb" Nov 
Nov 28 21:07:13 crc kubenswrapper[4957]: I1128 21:07:13.883100 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" event={"ID":"917e81d1-a7a3-431f-9b6f-511334a57f50","Type":"ContainerStarted","Data":"e0ef715795c94e9c886c8f4b5b33fa920680e61ebad854a83a3d96ede5d3cb9e"}
Nov 28 21:07:13 crc kubenswrapper[4957]: I1128 21:07:13.883655 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn"
Nov 28 21:07:13 crc kubenswrapper[4957]: I1128 21:07:13.919548 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn" podStartSLOduration=2.266586809 podStartE2EDuration="6.919531422s" podCreationTimestamp="2025-11-28 21:07:07 +0000 UTC" firstStartedPulling="2025-11-28 21:07:07.845063362 +0000 UTC m=+1067.313711261" lastFinishedPulling="2025-11-28 21:07:12.498007965 +0000 UTC m=+1071.966655874" observedRunningTime="2025-11-28 21:07:13.907870384 +0000 UTC m=+1073.376518313" watchObservedRunningTime="2025-11-28 21:07:13.919531422 +0000 UTC m=+1073.388179331"
Nov 28 21:07:17 crc kubenswrapper[4957]: I1128 21:07:17.366591 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-567f7c7dd7-9wckn"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.865051 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.866897 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.868508 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9pb9p"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.883294 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.884669 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.886818 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jjgfj"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.896122 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.897661 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.899495 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jg4g9"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.911269 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.919230 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.923600 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.936675 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"]
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.959868 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.963329 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-plg25"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.999171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pmb\" (UniqueName: \"kubernetes.io/projected/442226e4-b2b8-41c8-9278-2845b2fff0aa-kube-api-access-v8pmb\") pod \"designate-operator-controller-manager-78b4bc895b-kbhl9\" (UID: \"442226e4-b2b8-41c8-9278-2845b2fff0aa\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.999411 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2j69\" (UniqueName: \"kubernetes.io/projected/1a5138b3-6b84-43b0-bdc9-f867a83f4bc7-kube-api-access-p2j69\") pod \"cinder-operator-controller-manager-859b6ccc6-s85m7\" (UID: \"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"
Nov 28 21:07:39 crc kubenswrapper[4957]: I1128 21:07:39.999624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklzf\" (UniqueName: \"kubernetes.io/projected/12484928-2fe4-4bd6-bac2-e0f2e48829fe-kube-api-access-nklzf\") pod \"barbican-operator-controller-manager-7d9dfd778-sfgm2\" (UID: \"12484928-2fe4-4bd6-bac2-e0f2e48829fe\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.020362 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.042313 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.043777 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.047229 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-n7km2"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.062348 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.076901 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.078357 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.081343 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bwwmh"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.092892 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.100897 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.102331 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.103126 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8pmb\" (UniqueName: \"kubernetes.io/projected/442226e4-b2b8-41c8-9278-2845b2fff0aa-kube-api-access-v8pmb\") pod \"designate-operator-controller-manager-78b4bc895b-kbhl9\" (UID: \"442226e4-b2b8-41c8-9278-2845b2fff0aa\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.103174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2j69\" (UniqueName: \"kubernetes.io/projected/1a5138b3-6b84-43b0-bdc9-f867a83f4bc7-kube-api-access-p2j69\") pod \"cinder-operator-controller-manager-859b6ccc6-s85m7\" (UID: \"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.103202 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22hn\" (UniqueName: \"kubernetes.io/projected/c330a33e-ec13-4ec0-869b-4847b9385d5d-kube-api-access-c22hn\") pod \"glance-operator-controller-manager-668d9c48b9-8t4fj\" (UID: \"c330a33e-ec13-4ec0-869b-4847b9385d5d\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.103262 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklzf\" (UniqueName: \"kubernetes.io/projected/12484928-2fe4-4bd6-bac2-e0f2e48829fe-kube-api-access-nklzf\") pod \"barbican-operator-controller-manager-7d9dfd778-sfgm2\" (UID: \"12484928-2fe4-4bd6-bac2-e0f2e48829fe\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.107316 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.110109 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.110873 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9vbb4"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.117450 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p2n7f"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.117679 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.118077 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.124171 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.142921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklzf\" (UniqueName: \"kubernetes.io/projected/12484928-2fe4-4bd6-bac2-e0f2e48829fe-kube-api-access-nklzf\") pod \"barbican-operator-controller-manager-7d9dfd778-sfgm2\" (UID: \"12484928-2fe4-4bd6-bac2-e0f2e48829fe\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.142988 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.144444 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.149511 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gnwwn"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.153539 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.154673 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8pmb\" (UniqueName: \"kubernetes.io/projected/442226e4-b2b8-41c8-9278-2845b2fff0aa-kube-api-access-v8pmb\") pod \"designate-operator-controller-manager-78b4bc895b-kbhl9\" (UID: \"442226e4-b2b8-41c8-9278-2845b2fff0aa\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.155275 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.159795 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ntjpg"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.164942 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.166282 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.167958 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2j69\" (UniqueName: \"kubernetes.io/projected/1a5138b3-6b84-43b0-bdc9-f867a83f4bc7-kube-api-access-p2j69\") pod \"cinder-operator-controller-manager-859b6ccc6-s85m7\" (UID: \"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.170889 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vmwn5"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.171025 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.193273 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.206469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mk6k\" (UniqueName: \"kubernetes.io/projected/c2cca951-4ada-44ec-ab43-a1f69ee7f7cb-kube-api-access-8mk6k\") pod \"ironic-operator-controller-manager-6c548fd776-8wqx7\" (UID: \"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.206742 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9bk\" (UniqueName: \"kubernetes.io/projected/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-kube-api-access-gj9bk\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.206928 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22hn\" (UniqueName: \"kubernetes.io/projected/c330a33e-ec13-4ec0-869b-4847b9385d5d-kube-api-access-c22hn\") pod \"glance-operator-controller-manager-668d9c48b9-8t4fj\" (UID: \"c330a33e-ec13-4ec0-869b-4847b9385d5d\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.207061 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5nv\" (UniqueName: \"kubernetes.io/projected/8eac7f46-0beb-4f3f-a530-2fed527b6383-kube-api-access-wt5nv\") pod \"horizon-operator-controller-manager-68c6d99b8f-xqjj5\" (UID: \"8eac7f46-0beb-4f3f-a530-2fed527b6383\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.207184 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.207303 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgbw\" (UniqueName: \"kubernetes.io/projected/d50c67da-27ca-4ab9-bf83-b2275ff3d801-kube-api-access-dpgbw\") pod \"heat-operator-controller-manager-5f64f6f8bb-v6427\" (UID: \"d50c67da-27ca-4ab9-bf83-b2275ff3d801\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.207625 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.216702 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.220725 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.222344 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.225272 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fw7bf"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.226788 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.247147 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.248517 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.254240 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dzdmk"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.259538 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22hn\" (UniqueName: \"kubernetes.io/projected/c330a33e-ec13-4ec0-869b-4847b9385d5d-kube-api-access-c22hn\") pod \"glance-operator-controller-manager-668d9c48b9-8t4fj\" (UID: \"c330a33e-ec13-4ec0-869b-4847b9385d5d\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.264732 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.278811 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.286933 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-npt5l"]
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.288447 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.298735 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fbh4v"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.301710 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.310858 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bct\" (UniqueName: \"kubernetes.io/projected/a8962e83-cc90-4844-9bca-96e85cf789bd-kube-api-access-k4bct\") pod \"keystone-operator-controller-manager-546d4bdf48-47tjl\" (UID: \"a8962e83-cc90-4844-9bca-96e85cf789bd\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.310911 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrz5\" (UniqueName: \"kubernetes.io/projected/47f33b35-a8d3-4981-8001-47b906a33fa6-kube-api-access-wrrz5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ln4j9\" (UID: \"47f33b35-a8d3-4981-8001-47b906a33fa6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.310966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5nv\" (UniqueName: \"kubernetes.io/projected/8eac7f46-0beb-4f3f-a530-2fed527b6383-kube-api-access-wt5nv\") pod \"horizon-operator-controller-manager-68c6d99b8f-xqjj5\" (UID: \"8eac7f46-0beb-4f3f-a530-2fed527b6383\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311012 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311029 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgbw\" (UniqueName: \"kubernetes.io/projected/d50c67da-27ca-4ab9-bf83-b2275ff3d801-kube-api-access-dpgbw\") pod \"heat-operator-controller-manager-5f64f6f8bb-v6427\" (UID: \"d50c67da-27ca-4ab9-bf83-b2275ff3d801\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311060 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnz4n\" (UniqueName: \"kubernetes.io/projected/34faaa98-3568-4478-b968-b9cbe87c77f3-kube-api-access-rnz4n\") pod \"manila-operator-controller-manager-6546668bfd-2n5cx\" (UID: \"34faaa98-3568-4478-b968-b9cbe87c77f3\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311116 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4v7\" (UniqueName: \"kubernetes.io/projected/f510519a-6187-47f8-875e-3e9a5537c364-kube-api-access-7g4v7\") pod \"mariadb-operator-controller-manager-56bbcc9d85-d6twj\" (UID: \"f510519a-6187-47f8-875e-3e9a5537c364\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj"
Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311203 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mk6k\" (UniqueName:
\"kubernetes.io/projected/c2cca951-4ada-44ec-ab43-a1f69ee7f7cb-kube-api-access-8mk6k\") pod \"ironic-operator-controller-manager-6c548fd776-8wqx7\" (UID: \"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.311238 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj9bk\" (UniqueName: \"kubernetes.io/projected/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-kube-api-access-gj9bk\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.311364 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.311474 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert podName:96a751a3-4af7-4cb8-b12b-46e0d177b6f3 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:40.811453707 +0000 UTC m=+1100.280101686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert") pod "infra-operator-controller-manager-57548d458d-ccmt8" (UID: "96a751a3-4af7-4cb8-b12b-46e0d177b6f3") : secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.336905 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.344823 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5nv\" (UniqueName: \"kubernetes.io/projected/8eac7f46-0beb-4f3f-a530-2fed527b6383-kube-api-access-wt5nv\") pod \"horizon-operator-controller-manager-68c6d99b8f-xqjj5\" (UID: \"8eac7f46-0beb-4f3f-a530-2fed527b6383\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.345093 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgbw\" (UniqueName: \"kubernetes.io/projected/d50c67da-27ca-4ab9-bf83-b2275ff3d801-kube-api-access-dpgbw\") pod \"heat-operator-controller-manager-5f64f6f8bb-v6427\" (UID: \"d50c67da-27ca-4ab9-bf83-b2275ff3d801\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.345672 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mk6k\" (UniqueName: \"kubernetes.io/projected/c2cca951-4ada-44ec-ab43-a1f69ee7f7cb-kube-api-access-8mk6k\") pod \"ironic-operator-controller-manager-6c548fd776-8wqx7\" (UID: \"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.359339 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.359854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj9bk\" (UniqueName: 
\"kubernetes.io/projected/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-kube-api-access-gj9bk\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.364990 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.371537 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.374415 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hwn7v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.385031 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-npt5l"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhnt\" (UniqueName: \"kubernetes.io/projected/c59777ed-7790-45bc-972a-f9fbe8fbccf4-kube-api-access-bhhnt\") pod \"nova-operator-controller-manager-697bc559fc-bn4dd\" (UID: \"c59777ed-7790-45bc-972a-f9fbe8fbccf4\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416101 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bct\" (UniqueName: \"kubernetes.io/projected/a8962e83-cc90-4844-9bca-96e85cf789bd-kube-api-access-k4bct\") pod \"keystone-operator-controller-manager-546d4bdf48-47tjl\" (UID: \"a8962e83-cc90-4844-9bca-96e85cf789bd\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrz5\" (UniqueName: \"kubernetes.io/projected/47f33b35-a8d3-4981-8001-47b906a33fa6-kube-api-access-wrrz5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ln4j9\" (UID: \"47f33b35-a8d3-4981-8001-47b906a33fa6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416201 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnz4n\" (UniqueName: \"kubernetes.io/projected/34faaa98-3568-4478-b968-b9cbe87c77f3-kube-api-access-rnz4n\") pod \"manila-operator-controller-manager-6546668bfd-2n5cx\" (UID: \"34faaa98-3568-4478-b968-b9cbe87c77f3\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g4v7\" (UniqueName: \"kubernetes.io/projected/f510519a-6187-47f8-875e-3e9a5537c364-kube-api-access-7g4v7\") pod \"mariadb-operator-controller-manager-56bbcc9d85-d6twj\" (UID: \"f510519a-6187-47f8-875e-3e9a5537c364\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.416301 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2zf\" (UniqueName: \"kubernetes.io/projected/844d1842-4247-4b95-8cca-1785d3ed80b8-kube-api-access-mq2zf\") pod \"octavia-operator-controller-manager-998648c74-npt5l\" (UID: \"844d1842-4247-4b95-8cca-1785d3ed80b8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.417146 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.418552 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.425234 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dg2dd" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.425710 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.426503 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.427723 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.435081 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.435377 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-k285s" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.467250 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.442565 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bct\" (UniqueName: \"kubernetes.io/projected/a8962e83-cc90-4844-9bca-96e85cf789bd-kube-api-access-k4bct\") pod \"keystone-operator-controller-manager-546d4bdf48-47tjl\" (UID: \"a8962e83-cc90-4844-9bca-96e85cf789bd\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.492788 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrz5\" (UniqueName: \"kubernetes.io/projected/47f33b35-a8d3-4981-8001-47b906a33fa6-kube-api-access-wrrz5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ln4j9\" (UID: \"47f33b35-a8d3-4981-8001-47b906a33fa6\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.504731 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnz4n\" (UniqueName: \"kubernetes.io/projected/34faaa98-3568-4478-b968-b9cbe87c77f3-kube-api-access-rnz4n\") pod \"manila-operator-controller-manager-6546668bfd-2n5cx\" (UID: \"34faaa98-3568-4478-b968-b9cbe87c77f3\") 
" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.509391 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517228 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517309 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2zf\" (UniqueName: \"kubernetes.io/projected/844d1842-4247-4b95-8cca-1785d3ed80b8-kube-api-access-mq2zf\") pod \"octavia-operator-controller-manager-998648c74-npt5l\" (UID: \"844d1842-4247-4b95-8cca-1785d3ed80b8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9smr\" (UniqueName: \"kubernetes.io/projected/02e155d2-76c6-4fca-b013-6c2dcf607cdb-kube-api-access-n9smr\") pod \"placement-operator-controller-manager-78f8948974-2w9h7\" (UID: \"02e155d2-76c6-4fca-b013-6c2dcf607cdb\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517357 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mcm\" (UniqueName: \"kubernetes.io/projected/15b01ca6-83c4-47da-bd82-8b5c4a177561-kube-api-access-n7mcm\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517393 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhnt\" (UniqueName: \"kubernetes.io/projected/c59777ed-7790-45bc-972a-f9fbe8fbccf4-kube-api-access-bhhnt\") pod \"nova-operator-controller-manager-697bc559fc-bn4dd\" (UID: \"c59777ed-7790-45bc-972a-f9fbe8fbccf4\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.517434 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqzf\" (UniqueName: \"kubernetes.io/projected/499b2d8c-a27a-46f1-9f38-8b29ab905da7-kube-api-access-6mqzf\") pod \"ovn-operator-controller-manager-b6456fdb6-cnzv4\" (UID: \"499b2d8c-a27a-46f1-9f38-8b29ab905da7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.518316 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.522036 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.531361 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.533642 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.538606 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2qdl4" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.541342 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g4v7\" (UniqueName: \"kubernetes.io/projected/f510519a-6187-47f8-875e-3e9a5537c364-kube-api-access-7g4v7\") pod \"mariadb-operator-controller-manager-56bbcc9d85-d6twj\" (UID: \"f510519a-6187-47f8-875e-3e9a5537c364\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.560436 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.565167 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhnt\" (UniqueName: \"kubernetes.io/projected/c59777ed-7790-45bc-972a-f9fbe8fbccf4-kube-api-access-bhhnt\") pod \"nova-operator-controller-manager-697bc559fc-bn4dd\" (UID: \"c59777ed-7790-45bc-972a-f9fbe8fbccf4\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.567163 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.567309 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2zf\" (UniqueName: \"kubernetes.io/projected/844d1842-4247-4b95-8cca-1785d3ed80b8-kube-api-access-mq2zf\") pod \"octavia-operator-controller-manager-998648c74-npt5l\" (UID: \"844d1842-4247-4b95-8cca-1785d3ed80b8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.568495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.570757 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v4bcb" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.587536 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.614116 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-v56f9"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.616009 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j4w\" (UniqueName: \"kubernetes.io/projected/a3a9a0f3-6f26-4174-973d-049a1b8a2573-kube-api-access-b5j4w\") pod \"telemetry-operator-controller-manager-7f6754bd54-dbj68\" (UID: \"a3a9a0f3-6f26-4174-973d-049a1b8a2573\") " pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618502 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618548 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wkh\" (UniqueName: \"kubernetes.io/projected/b8066278-4583-4fe3-aed6-93543482ab1e-kube-api-access-l4wkh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nq5h8\" (UID: \"b8066278-4583-4fe3-aed6-93543482ab1e\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9smr\" (UniqueName: \"kubernetes.io/projected/02e155d2-76c6-4fca-b013-6c2dcf607cdb-kube-api-access-n9smr\") pod \"placement-operator-controller-manager-78f8948974-2w9h7\" (UID: \"02e155d2-76c6-4fca-b013-6c2dcf607cdb\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618616 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mcm\" (UniqueName: \"kubernetes.io/projected/15b01ca6-83c4-47da-bd82-8b5c4a177561-kube-api-access-n7mcm\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.618669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqzf\" (UniqueName: \"kubernetes.io/projected/499b2d8c-a27a-46f1-9f38-8b29ab905da7-kube-api-access-6mqzf\") pod \"ovn-operator-controller-manager-b6456fdb6-cnzv4\" (UID: \"499b2d8c-a27a-46f1-9f38-8b29ab905da7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.619037 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.619074 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert podName:15b01ca6-83c4-47da-bd82-8b5c4a177561 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:41.119062442 +0000 UTC m=+1100.587710351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" (UID: "15b01ca6-83c4-47da-bd82-8b5c4a177561") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.619666 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bsrvq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.646146 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-v56f9"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.649705 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mcm\" (UniqueName: \"kubernetes.io/projected/15b01ca6-83c4-47da-bd82-8b5c4a177561-kube-api-access-n7mcm\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.651694 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9smr\" (UniqueName: \"kubernetes.io/projected/02e155d2-76c6-4fca-b013-6c2dcf607cdb-kube-api-access-n9smr\") pod \"placement-operator-controller-manager-78f8948974-2w9h7\" (UID: \"02e155d2-76c6-4fca-b013-6c2dcf607cdb\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.652038 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqzf\" (UniqueName: \"kubernetes.io/projected/499b2d8c-a27a-46f1-9f38-8b29ab905da7-kube-api-access-6mqzf\") pod \"ovn-operator-controller-manager-b6456fdb6-cnzv4\" (UID: \"499b2d8c-a27a-46f1-9f38-8b29ab905da7\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.652998 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.656134 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.687416 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.703947 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.713399 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.714810 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.716697 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dn4gq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.719656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4wkh\" (UniqueName: \"kubernetes.io/projected/b8066278-4583-4fe3-aed6-93543482ab1e-kube-api-access-l4wkh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nq5h8\" (UID: \"b8066278-4583-4fe3-aed6-93543482ab1e\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.719769 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjbjt\" (UniqueName: \"kubernetes.io/projected/554f334d-cef4-48f9-bb57-03261844fbde-kube-api-access-cjbjt\") pod \"test-operator-controller-manager-5854674fcc-v56f9\" (UID: \"554f334d-cef4-48f9-bb57-03261844fbde\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.719833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5j4w\" (UniqueName: \"kubernetes.io/projected/a3a9a0f3-6f26-4174-973d-049a1b8a2573-kube-api-access-b5j4w\") pod \"telemetry-operator-controller-manager-7f6754bd54-dbj68\" (UID: \"a3a9a0f3-6f26-4174-973d-049a1b8a2573\") " pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.724439 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.731079 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.732750 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.740129 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.745873 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5j4w\" (UniqueName: \"kubernetes.io/projected/a3a9a0f3-6f26-4174-973d-049a1b8a2573-kube-api-access-b5j4w\") pod \"telemetry-operator-controller-manager-7f6754bd54-dbj68\" (UID: \"a3a9a0f3-6f26-4174-973d-049a1b8a2573\") " pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.763023 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4wkh\" (UniqueName: \"kubernetes.io/projected/b8066278-4583-4fe3-aed6-93543482ab1e-kube-api-access-l4wkh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nq5h8\" (UID: \"b8066278-4583-4fe3-aed6-93543482ab1e\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.830618 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjbjt\" (UniqueName: \"kubernetes.io/projected/554f334d-cef4-48f9-bb57-03261844fbde-kube-api-access-cjbjt\") pod \"test-operator-controller-manager-5854674fcc-v56f9\" (UID: \"554f334d-cef4-48f9-bb57-03261844fbde\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.832676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.832736 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgth\" (UniqueName: \"kubernetes.io/projected/3d6f1d41-eaa5-4258-906c-5894ac698e5b-kube-api-access-nwgth\") pod \"watcher-operator-controller-manager-769dc69bc-qshzq\" (UID: \"3d6f1d41-eaa5-4258-906c-5894ac698e5b\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.833408 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.833798 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: E1128 21:07:40.833850 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert podName:96a751a3-4af7-4cb8-b12b-46e0d177b6f3 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:41.833836725 +0000 UTC m=+1101.302484634 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert") pod "infra-operator-controller-manager-57548d458d-ccmt8" (UID: "96a751a3-4af7-4cb8-b12b-46e0d177b6f3") : secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.876033 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.877038 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.883814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjbjt\" (UniqueName: \"kubernetes.io/projected/554f334d-cef4-48f9-bb57-03261844fbde-kube-api-access-cjbjt\") pod \"test-operator-controller-manager-5854674fcc-v56f9\" (UID: \"554f334d-cef4-48f9-bb57-03261844fbde\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.894276 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.897662 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cc6vp" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.897843 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.898466 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.935916 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.935989 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86rt\" (UniqueName: \"kubernetes.io/projected/aaaab82e-6456-4b20-9d92-f19458df9948-kube-api-access-f86rt\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.936063 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgth\" (UniqueName: \"kubernetes.io/projected/3d6f1d41-eaa5-4258-906c-5894ac698e5b-kube-api-access-nwgth\") pod \"watcher-operator-controller-manager-769dc69bc-qshzq\" (UID: \"3d6f1d41-eaa5-4258-906c-5894ac698e5b\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.936130 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.962493 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt"] Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.963854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgth\" (UniqueName: \"kubernetes.io/projected/3d6f1d41-eaa5-4258-906c-5894ac698e5b-kube-api-access-nwgth\") pod \"watcher-operator-controller-manager-769dc69bc-qshzq\" (UID: \"3d6f1d41-eaa5-4258-906c-5894ac698e5b\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.965184 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.973112 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r6642" Nov 28 21:07:40 crc kubenswrapper[4957]: I1128 21:07:40.980433 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt"] Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.017534 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.040864 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86rt\" (UniqueName: \"kubernetes.io/projected/aaaab82e-6456-4b20-9d92-f19458df9948-kube-api-access-f86rt\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.040965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwjr\" (UniqueName: \"kubernetes.io/projected/7a4dc310-e5f8-4a6f-8c8b-94a7faca596d-kube-api-access-fwwjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfdjt\" (UID: \"7a4dc310-e5f8-4a6f-8c8b-94a7faca596d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.041249 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.041326 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " 
pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.041499 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.041561 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:41.541545934 +0000 UTC m=+1101.010193843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.043374 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.043538 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.043570 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:41.543561773 +0000 UTC m=+1101.012209682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "metrics-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.055841 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.066189 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86rt\" (UniqueName: \"kubernetes.io/projected/aaaab82e-6456-4b20-9d92-f19458df9948-kube-api-access-f86rt\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.071745 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.145435 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwjr\" (UniqueName: \"kubernetes.io/projected/7a4dc310-e5f8-4a6f-8c8b-94a7faca596d-kube-api-access-fwwjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfdjt\" (UID: \"7a4dc310-e5f8-4a6f-8c8b-94a7faca596d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.145775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.147251 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.147292 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert podName:15b01ca6-83c4-47da-bd82-8b5c4a177561 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:42.147279624 +0000 UTC m=+1101.615927533 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" (UID: "15b01ca6-83c4-47da-bd82-8b5c4a177561") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.171508 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwjr\" (UniqueName: \"kubernetes.io/projected/7a4dc310-e5f8-4a6f-8c8b-94a7faca596d-kube-api-access-fwwjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cfdjt\" (UID: \"7a4dc310-e5f8-4a6f-8c8b-94a7faca596d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.278682 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9"] Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.322756 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2"] Nov 28 21:07:41 crc kubenswrapper[4957]: W1128 21:07:41.356811 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12484928_2fe4_4bd6_bac2_e0f2e48829fe.slice/crio-b83ee76a38b39b80035a34c35e0dd51f2225074248b0d97aeba46e6b25d08ab1 WatchSource:0}: Error finding container b83ee76a38b39b80035a34c35e0dd51f2225074248b0d97aeba46e6b25d08ab1: Status 404 returned error can't find the container with id b83ee76a38b39b80035a34c35e0dd51f2225074248b0d97aeba46e6b25d08ab1 Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.433983 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.456048 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7"] Nov 28 21:07:41 crc kubenswrapper[4957]: W1128 21:07:41.530923 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a5138b3_6b84_43b0_bdc9_f867a83f4bc7.slice/crio-f531392b49465c77feeb2fe7236815e78b51bc86df3b118fcd131862906837e8 WatchSource:0}: Error finding container f531392b49465c77feeb2fe7236815e78b51bc86df3b118fcd131862906837e8: Status 404 returned error can't find the container with id f531392b49465c77feeb2fe7236815e78b51bc86df3b118fcd131862906837e8 Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.555245 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.555328 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.555450 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.555501 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:42.555487062 +0000 UTC m=+1102.024134971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.555897 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.555970 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:42.555952074 +0000 UTC m=+1102.024599983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "metrics-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.859911 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.860070 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: E1128 21:07:41.860129 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert podName:96a751a3-4af7-4cb8-b12b-46e0d177b6f3 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:43.860112443 +0000 UTC m=+1103.328760352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert") pod "infra-operator-controller-manager-57548d458d-ccmt8" (UID: "96a751a3-4af7-4cb8-b12b-46e0d177b6f3") : secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.988480 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427"] Nov 28 21:07:41 crc kubenswrapper[4957]: W1128 21:07:41.988791 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd50c67da_27ca_4ab9_bf83_b2275ff3d801.slice/crio-559706fa3a81f528561c0cf329e1e00b449e3bb7643a5df9fedab159b07e3452 WatchSource:0}: Error finding container 559706fa3a81f528561c0cf329e1e00b449e3bb7643a5df9fedab159b07e3452: Status 404 returned error can't find the container with id 559706fa3a81f528561c0cf329e1e00b449e3bb7643a5df9fedab159b07e3452 Nov 28 21:07:41 crc kubenswrapper[4957]: I1128 21:07:41.997881 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj"] Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.000375 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc330a33e_ec13_4ec0_869b_4847b9385d5d.slice/crio-bf0aaa598820d7bf24e5114ed00aa6130fc0ce129e015e48b90a5e189653d443 WatchSource:0}: Error finding container bf0aaa598820d7bf24e5114ed00aa6130fc0ce129e015e48b90a5e189653d443: Status 404 returned error can't find the container with id bf0aaa598820d7bf24e5114ed00aa6130fc0ce129e015e48b90a5e189653d443 Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.001919 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59777ed_7790_45bc_972a_f9fbe8fbccf4.slice/crio-9d4d42a66792e9a95f5b57fe9f4a2d7b25b9d30f390b93d5eb5fef74e647e34e WatchSource:0}: Error finding container 9d4d42a66792e9a95f5b57fe9f4a2d7b25b9d30f390b93d5eb5fef74e647e34e: Status 404 returned error can't find the container with id 
9d4d42a66792e9a95f5b57fe9f4a2d7b25b9d30f390b93d5eb5fef74e647e34e Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.004291 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eac7f46_0beb_4f3f_a530_2fed527b6383.slice/crio-f23d9712f9774fc5953e9aaa77971632d178e949a192dedb8f896281fc4fdab8 WatchSource:0}: Error finding container f23d9712f9774fc5953e9aaa77971632d178e949a192dedb8f896281fc4fdab8: Status 404 returned error can't find the container with id f23d9712f9774fc5953e9aaa77971632d178e949a192dedb8f896281fc4fdab8 Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.006117 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.016848 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.126704 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" event={"ID":"c59777ed-7790-45bc-972a-f9fbe8fbccf4","Type":"ContainerStarted","Data":"9d4d42a66792e9a95f5b57fe9f4a2d7b25b9d30f390b93d5eb5fef74e647e34e"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.127709 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" event={"ID":"8eac7f46-0beb-4f3f-a530-2fed527b6383","Type":"ContainerStarted","Data":"f23d9712f9774fc5953e9aaa77971632d178e949a192dedb8f896281fc4fdab8"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.128612 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" event={"ID":"c330a33e-ec13-4ec0-869b-4847b9385d5d","Type":"ContainerStarted","Data":"bf0aaa598820d7bf24e5114ed00aa6130fc0ce129e015e48b90a5e189653d443"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.129425 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" event={"ID":"442226e4-b2b8-41c8-9278-2845b2fff0aa","Type":"ContainerStarted","Data":"a0e6c82f6c9c6d38a1b4df17e8bb8eff311c866da518ec74686ca5340dc8d4b0"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.130490 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" event={"ID":"d50c67da-27ca-4ab9-bf83-b2275ff3d801","Type":"ContainerStarted","Data":"559706fa3a81f528561c0cf329e1e00b449e3bb7643a5df9fedab159b07e3452"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.140115 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" event={"ID":"12484928-2fe4-4bd6-bac2-e0f2e48829fe","Type":"ContainerStarted","Data":"b83ee76a38b39b80035a34c35e0dd51f2225074248b0d97aeba46e6b25d08ab1"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.142620 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" event={"ID":"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7","Type":"ContainerStarted","Data":"f531392b49465c77feeb2fe7236815e78b51bc86df3b118fcd131862906837e8"} Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.182003 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.182245 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.182293 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert podName:15b01ca6-83c4-47da-bd82-8b5c4a177561 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:44.182279838 +0000 UTC m=+1103.650927747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" (UID: "15b01ca6-83c4-47da-bd82-8b5c4a177561") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.188532 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.594759 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.595285 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.595540 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.595622 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:44.595604503 +0000 UTC m=+1104.064252412 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "metrics-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.596000 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.596035 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:44.596023753 +0000 UTC m=+1104.064671662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.685106 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-npt5l"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.701530 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-v56f9"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.715441 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.722743 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.728097 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.734959 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.767717 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9"] Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.772828 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499b2d8c_a27a_46f1_9f38_8b29ab905da7.slice/crio-34082ac4628f8e438894887c5b518e27e1cbb39d78dba16f49f71f0ca55cfa91 WatchSource:0}: Error finding container 34082ac4628f8e438894887c5b518e27e1cbb39d78dba16f49f71f0ca55cfa91: Status 404 returned error can't find the container with id 34082ac4628f8e438894887c5b518e27e1cbb39d78dba16f49f71f0ca55cfa91 Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.773323 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod844d1842_4247_4b95_8cca_1785d3ed80b8.slice/crio-f819f7e090c2e8a240dc08c8a63249a2d08e6d4d72d045b5ffb8d8b2a11d37c5 WatchSource:0}: Error finding container f819f7e090c2e8a240dc08c8a63249a2d08e6d4d72d045b5ffb8d8b2a11d37c5: Status 404 returned error can't find the container with id 
f819f7e090c2e8a240dc08c8a63249a2d08e6d4d72d045b5ffb8d8b2a11d37c5 Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.785056 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7"] Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.806649 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4dc310_e5f8_4a6f_8c8b_94a7faca596d.slice/crio-8f753777863b6fb7bb971da1aea6385caff0825ed9a985bcf3da8076b6575676 WatchSource:0}: Error finding container 8f753777863b6fb7bb971da1aea6385caff0825ed9a985bcf3da8076b6575676: Status 404 returned error can't find the container with id 8f753777863b6fb7bb971da1aea6385caff0825ed9a985bcf3da8076b6575676 Nov 28 21:07:42 crc kubenswrapper[4957]: W1128 21:07:42.810962 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8066278_4583_4fe3_aed6_93543482ab1e.slice/crio-e07c206ccfc5c8bd4e10fba12e8f554b5bd36f2287d716f5d6653ac1bd3f630c WatchSource:0}: Error finding container e07c206ccfc5c8bd4e10fba12e8f554b5bd36f2287d716f5d6653ac1bd3f630c: Status 404 returned error can't find the container with id e07c206ccfc5c8bd4e10fba12e8f554b5bd36f2287d716f5d6653ac1bd3f630c Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.814619 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4wkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-nq5h8_openstack-operators(b8066278-4583-4fe3-aed6-93543482ab1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.817853 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwgth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-qshzq_openstack-operators(3d6f1d41-eaa5-4258-906c-5894ac698e5b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 21:07:42 crc 
kubenswrapper[4957]: E1128 21:07:42.818094 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fwwjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-cfdjt_openstack-operators(7a4dc310-e5f8-4a6f-8c8b-94a7faca596d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.818733 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4wkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-nq5h8_openstack-operators(b8066278-4583-4fe3-aed6-93543482ab1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.819384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" podUID="7a4dc310-e5f8-4a6f-8c8b-94a7faca596d" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.821974 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" podUID="b8066278-4583-4fe3-aed6-93543482ab1e" Nov 28 21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.825084 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwgth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-qshzq_openstack-operators(3d6f1d41-eaa5-4258-906c-5894ac698e5b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 
21:07:42 crc kubenswrapper[4957]: E1128 21:07:42.826260 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" podUID="3d6f1d41-eaa5-4258-906c-5894ac698e5b" Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.850962 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.851112 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.851181 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq"] Nov 28 21:07:42 crc kubenswrapper[4957]: I1128 21:07:42.851263 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt"] Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.153040 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" event={"ID":"844d1842-4247-4b95-8cca-1785d3ed80b8","Type":"ContainerStarted","Data":"f819f7e090c2e8a240dc08c8a63249a2d08e6d4d72d045b5ffb8d8b2a11d37c5"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.155124 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" event={"ID":"f510519a-6187-47f8-875e-3e9a5537c364","Type":"ContainerStarted","Data":"ffd2133ffb71dbe901552f829d3225e69241b57bf189714b01566032ad649678"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.161257 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" event={"ID":"34faaa98-3568-4478-b968-b9cbe87c77f3","Type":"ContainerStarted","Data":"f920099e78d12a38a23461bfa8cbb38a63c79d97e8c86383c6513d96d42798e7"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.165944 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" event={"ID":"499b2d8c-a27a-46f1-9f38-8b29ab905da7","Type":"ContainerStarted","Data":"34082ac4628f8e438894887c5b518e27e1cbb39d78dba16f49f71f0ca55cfa91"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.168157 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" event={"ID":"7a4dc310-e5f8-4a6f-8c8b-94a7faca596d","Type":"ContainerStarted","Data":"8f753777863b6fb7bb971da1aea6385caff0825ed9a985bcf3da8076b6575676"} Nov 28 21:07:43 crc kubenswrapper[4957]: E1128 21:07:43.170325 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" podUID="7a4dc310-e5f8-4a6f-8c8b-94a7faca596d" Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.171867 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" event={"ID":"a3a9a0f3-6f26-4174-973d-049a1b8a2573","Type":"ContainerStarted","Data":"7d505e2e44ac8201aa0e1bd10f9444840756b42831e5d5c8eee6031a032c691b"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.174578 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" event={"ID":"02e155d2-76c6-4fca-b013-6c2dcf607cdb","Type":"ContainerStarted","Data":"d4d846f0acb70337dae5e46b2e3b989ab49210e9e748f1afa6a720a88a0acfb3"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.176780 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" event={"ID":"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb","Type":"ContainerStarted","Data":"405136752f87f8b050ffa99d648056b1b57ac3bc8c5e3dce18e3d852c7a6d130"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.177997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" event={"ID":"47f33b35-a8d3-4981-8001-47b906a33fa6","Type":"ContainerStarted","Data":"5118b06188d2096181b08cf3099d675ea63d5ff52e98fd3551dd5c59aba139e9"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.181857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" event={"ID":"b8066278-4583-4fe3-aed6-93543482ab1e","Type":"ContainerStarted","Data":"e07c206ccfc5c8bd4e10fba12e8f554b5bd36f2287d716f5d6653ac1bd3f630c"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.183519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" event={"ID":"554f334d-cef4-48f9-bb57-03261844fbde","Type":"ContainerStarted","Data":"1b4fd68885d3efcfd535753a59c3fafcf32d01fba7de69b431c2ee835fbf10e3"} Nov 28 21:07:43 crc kubenswrapper[4957]: E1128 21:07:43.185742 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" podUID="b8066278-4583-4fe3-aed6-93543482ab1e" Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.187374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" event={"ID":"a8962e83-cc90-4844-9bca-96e85cf789bd","Type":"ContainerStarted","Data":"7f2f05668cb20697076f884af1e76761f866f765c48e543e6df9e233fa7f13e5"} Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.191826 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" event={"ID":"3d6f1d41-eaa5-4258-906c-5894ac698e5b","Type":"ContainerStarted","Data":"fa300477d8ba96890316872263ce116fa4d92250468f7c423cdcb8ee79ffcedd"} Nov 28 21:07:43 crc kubenswrapper[4957]: E1128 21:07:43.194073 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" podUID="3d6f1d41-eaa5-4258-906c-5894ac698e5b" Nov 28 21:07:43 crc kubenswrapper[4957]: I1128 21:07:43.928343 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:43 crc kubenswrapper[4957]: E1128 21:07:43.929503 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:43 crc kubenswrapper[4957]: E1128 21:07:43.929575 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert podName:96a751a3-4af7-4cb8-b12b-46e0d177b6f3 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:47.929537489 +0000 UTC m=+1107.398185398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert") pod "infra-operator-controller-manager-57548d458d-ccmt8" (UID: "96a751a3-4af7-4cb8-b12b-46e0d177b6f3") : secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.203478 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" podUID="7a4dc310-e5f8-4a6f-8c8b-94a7faca596d" Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.203513 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" podUID="3d6f1d41-eaa5-4258-906c-5894ac698e5b" Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.204183 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" podUID="b8066278-4583-4fe3-aed6-93543482ab1e" Nov 28 21:07:44 crc kubenswrapper[4957]: I1128 21:07:44.234009 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.234420 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.234475 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert podName:15b01ca6-83c4-47da-bd82-8b5c4a177561 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:48.234458088 +0000 UTC m=+1107.703105987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" (UID: "15b01ca6-83c4-47da-bd82-8b5c4a177561") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: I1128 21:07:44.642829 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:44 crc kubenswrapper[4957]: I1128 21:07:44.642910 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.642999 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.643064 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:48.643049316 +0000 UTC m=+1108.111697295 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "metrics-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.643126 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:44 crc kubenswrapper[4957]: E1128 21:07:44.643260 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. 
No retries permitted until 2025-11-28 21:07:48.64319104 +0000 UTC m=+1108.111838939 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: I1128 21:07:48.018545 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.019329 4957 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.019387 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert podName:96a751a3-4af7-4cb8-b12b-46e0d177b6f3 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:56.01936791 +0000 UTC m=+1115.488015819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert") pod "infra-operator-controller-manager-57548d458d-ccmt8" (UID: "96a751a3-4af7-4cb8-b12b-46e0d177b6f3") : secret "infra-operator-webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: I1128 21:07:48.323794 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.323950 4957 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.324003 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert podName:15b01ca6-83c4-47da-bd82-8b5c4a177561 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:56.32398951 +0000 UTC m=+1115.792637419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" (UID: "15b01ca6-83c4-47da-bd82-8b5c4a177561") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: I1128 21:07:48.730528 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.730726 4957 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: I1128 21:07:48.730760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.730800 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:56.730783247 +0000 UTC m=+1116.199431146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "metrics-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.730869 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:48 crc kubenswrapper[4957]: E1128 21:07:48.730923 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:07:56.7309115 +0000 UTC m=+1116.199559499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.081988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.091116 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96a751a3-4af7-4cb8-b12b-46e0d177b6f3-cert\") pod \"infra-operator-controller-manager-57548d458d-ccmt8\" (UID: \"96a751a3-4af7-4cb8-b12b-46e0d177b6f3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.238703 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.387850 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.391103 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15b01ca6-83c4-47da-bd82-8b5c4a177561-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v\" (UID: \"15b01ca6-83c4-47da-bd82-8b5c4a177561\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.556876 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.793786 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.794092 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:07:56 crc kubenswrapper[4957]: E1128 21:07:56.794229 4957 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 21:07:56 crc kubenswrapper[4957]: E1128 21:07:56.794277 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs podName:aaaab82e-6456-4b20-9d92-f19458df9948 nodeName:}" failed. No retries permitted until 2025-11-28 21:08:12.794265423 +0000 UTC m=+1132.262913332 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs") pod "openstack-operator-controller-manager-5fb8944fcb-x9n55" (UID: "aaaab82e-6456-4b20-9d92-f19458df9948") : secret "webhook-server-cert" not found Nov 28 21:07:56 crc kubenswrapper[4957]: I1128 21:07:56.798192 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-metrics-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:02 crc kubenswrapper[4957]: E1128 21:08:02.817908 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52" Nov 28 21:08:02 crc kubenswrapper[4957]: E1128 21:08:02.819310 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52" Nov 28 21:08:02 crc kubenswrapper[4957]: E1128 21:08:02.819471 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.17:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5j4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f6754bd54-dbj68_openstack-operators(a3a9a0f3-6f26-4174-973d-049a1b8a2573): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:03 crc kubenswrapper[4957]: E1128 21:08:03.463007 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Nov 28 21:08:03 crc kubenswrapper[4957]: E1128 21:08:03.463191 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrrz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-ln4j9_openstack-operators(47f33b35-a8d3-4981-8001-47b906a33fa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:05 crc kubenswrapper[4957]: E1128 21:08:05.103706 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Nov 28 21:08:05 crc kubenswrapper[4957]: E1128 21:08:05.104306 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9smr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-2w9h7_openstack-operators(02e155d2-76c6-4fca-b013-6c2dcf607cdb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:06 crc kubenswrapper[4957]: E1128 21:08:06.254349 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Nov 28 21:08:06 crc kubenswrapper[4957]: E1128 21:08:06.254523 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g4v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-d6twj_openstack-operators(f510519a-6187-47f8-875e-3e9a5537c364): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:06 crc kubenswrapper[4957]: E1128 21:08:06.701042 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Nov 28 21:08:06 crc kubenswrapper[4957]: E1128 21:08:06.701235 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8pmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-kbhl9_openstack-operators(442226e4-b2b8-41c8-9278-2845b2fff0aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:08 crc kubenswrapper[4957]: E1128 21:08:08.286358 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 28 21:08:08 crc kubenswrapper[4957]: E1128 21:08:08.286551 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnz4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-2n5cx_openstack-operators(34faaa98-3568-4478-b968-b9cbe87c77f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:08 crc kubenswrapper[4957]: E1128 21:08:08.807461 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 28 21:08:08 crc kubenswrapper[4957]: E1128 21:08:08.807639 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhhnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-bn4dd_openstack-operators(c59777ed-7790-45bc-972a-f9fbe8fbccf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:09 crc kubenswrapper[4957]: E1128 21:08:09.667259 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Nov 28 21:08:09 crc kubenswrapper[4957]: E1128 21:08:09.667770 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4bct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-47tjl_openstack-operators(a8962e83-cc90-4844-9bca-96e85cf789bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:08:11 crc kubenswrapper[4957]: I1128 21:08:11.886753 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8"] Nov 28 21:08:11 crc kubenswrapper[4957]: W1128 21:08:11.946296 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a751a3_4af7_4cb8_b12b_46e0d177b6f3.slice/crio-2ca564e11a7920c6743b0c6c2411b9a69c054a1a1a5992915f3b9a6a35e57fc9 WatchSource:0}: Error finding container 2ca564e11a7920c6743b0c6c2411b9a69c054a1a1a5992915f3b9a6a35e57fc9: Status 404 returned error can't find the container with id 2ca564e11a7920c6743b0c6c2411b9a69c054a1a1a5992915f3b9a6a35e57fc9 Nov 28 21:08:11 crc kubenswrapper[4957]: I1128 21:08:11.986936 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v"] Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.432296 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" event={"ID":"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7","Type":"ContainerStarted","Data":"841f77adfd4344a35c9de4b2d1b859792b4f51433b64a41c63a78f3cb69bb818"} Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.434244 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" event={"ID":"499b2d8c-a27a-46f1-9f38-8b29ab905da7","Type":"ContainerStarted","Data":"928b564dc8f3f17da2443bf48b8a3d44019754279dd19c48618b8c7e8250233d"} Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.435541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" event={"ID":"96a751a3-4af7-4cb8-b12b-46e0d177b6f3","Type":"ContainerStarted","Data":"2ca564e11a7920c6743b0c6c2411b9a69c054a1a1a5992915f3b9a6a35e57fc9"} Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.437728 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" event={"ID":"c330a33e-ec13-4ec0-869b-4847b9385d5d","Type":"ContainerStarted","Data":"64a45980f373e71406fdb97184b6680a2fd01bd28ddc172413b6248c0da80641"} Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.440423 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" 
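
The six "Unhandled Error" dumps above are kubelet's rendering of the corev1.Container spec it was asked to start: the operator deployments share one manager-container template and differ only in image digest, pod identity, and the generated kube-api-access-* token volume. A minimal Go sketch of that template, reconstructed from the logged values and assuming the k8s.io/api and k8s.io/apimachinery modules (managerContainer and boolPtr are names of my own, not from the operators):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func boolPtr(b bool) *bool { return &b }

    // managerContainer rebuilds the container spec from the dumps above; only
    // the image digest and the kube-api-access-* volume name vary per operator.
    func managerContainer(image, tokenVolume string) corev1.Container {
    	return corev1.Container{
    		Name:    "manager",
    		Image:   image,
    		Command: []string{"/manager"},
    		Args: []string{
    			"--leader-elect",
    			"--health-probe-bind-address=:8081",
    			"--metrics-bind-address=127.0.0.1:8080",
    		},
    		Env: []corev1.EnvVar{
    			{Name: "LEASE_DURATION", Value: "30"},
    			{Name: "RENEW_DEADLINE", Value: "20"},
    			{Name: "RETRY_PERIOD", Value: "5"},
    			{Name: "ENABLE_WEBHOOKS", Value: "false"},
    			{Name: "METRICS_CERTS", Value: "false"},
    		},
    		Resources: corev1.ResourceRequirements{
    			Limits: corev1.ResourceList{
    				corev1.ResourceCPU:    resource.MustParse("500m"),  // {{500 -3}} DecimalSI
    				corev1.ResourceMemory: resource.MustParse("512Mi"), // {{536870912 0}} BinarySI
    			},
    			Requests: corev1.ResourceList{
    				corev1.ResourceCPU:    resource.MustParse("10m"),
    				corev1.ResourceMemory: resource.MustParse("256Mi"),
    			},
    		},
    		VolumeMounts: []corev1.VolumeMount{{
    			Name:      tokenVolume,
    			ReadOnly:  true,
    			MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
    		}},
    		LivenessProbe: &corev1.Probe{
    			ProbeHandler: corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{
    				Path: "/healthz", Port: intstr.FromInt(8081),
    			}},
    			InitialDelaySeconds: 15, TimeoutSeconds: 1, PeriodSeconds: 20,
    			SuccessThreshold: 1, FailureThreshold: 3,
    		},
    		ReadinessProbe: &corev1.Probe{
    			ProbeHandler: corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{
    				Path: "/readyz", Port: intstr.FromInt(8081),
    			}},
    			InitialDelaySeconds: 5, TimeoutSeconds: 1, PeriodSeconds: 10,
    			SuccessThreshold: 1, FailureThreshold: 3,
    		},
    		SecurityContext: &corev1.SecurityContext{
    			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"MKNOD"}},
    			AllowPrivilegeEscalation: boolPtr(false),
    		},
    		TerminationMessagePath:   "/dev/termination-log",
    		TerminationMessagePolicy: corev1.TerminationMessageReadFile,
    		ImagePullPolicy:          corev1.PullIfNotPresent,
    	}
    }

    func main() {
    	c := managerContainer(
    		"quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7",
    		"kube-api-access-v8pmb",
    	)
    	fmt.Println(c.Name, c.Image)
    }

The {{500 -3} {} 500m DecimalSI} notation in the dumps is resource.Quantity's internal form, an unscaled value of 500 at decimal exponent -3; {{536870912 0} {} BinarySI} is 2^29 bytes, i.e. 512Mi.
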
event={"ID":"d50c67da-27ca-4ab9-bf83-b2275ff3d801","Type":"ContainerStarted","Data":"db1fff55f288e1e8e5b12bc0233929a8cc1752efb23edcb5f312e21b19409ba2"} Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.810561 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.818044 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aaaab82e-6456-4b20-9d92-f19458df9948-webhook-certs\") pod \"openstack-operator-controller-manager-5fb8944fcb-x9n55\" (UID: \"aaaab82e-6456-4b20-9d92-f19458df9948\") " pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:12 crc kubenswrapper[4957]: I1128 21:08:12.894748 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:16 crc kubenswrapper[4957]: W1128 21:08:16.357103 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b01ca6_83c4_47da_bd82_8b5c4a177561.slice/crio-d8ddca23dd98e41b46e3ddede093628fbbe19a0cf6e224caec1834cd4497c89f WatchSource:0}: Error finding container d8ddca23dd98e41b46e3ddede093628fbbe19a0cf6e224caec1834cd4497c89f: Status 404 returned error can't find the container with id d8ddca23dd98e41b46e3ddede093628fbbe19a0cf6e224caec1834cd4497c89f Nov 28 21:08:16 crc kubenswrapper[4957]: I1128 21:08:16.474956 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" event={"ID":"15b01ca6-83c4-47da-bd82-8b5c4a177561","Type":"ContainerStarted","Data":"d8ddca23dd98e41b46e3ddede093628fbbe19a0cf6e224caec1834cd4497c89f"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.494381 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" event={"ID":"554f334d-cef4-48f9-bb57-03261844fbde","Type":"ContainerStarted","Data":"4300cae48275e20c82d7e711e619681196a8cc4b2f8fad6312313a615643c997"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.496693 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" event={"ID":"12484928-2fe4-4bd6-bac2-e0f2e48829fe","Type":"ContainerStarted","Data":"01908a27ccce98040e9bc198f84c8e3710689fe87f0977e6cf4cdb95fbb94e85"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.498049 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" event={"ID":"3d6f1d41-eaa5-4258-906c-5894ac698e5b","Type":"ContainerStarted","Data":"135d9ba64a342317f729254a9ae695494df46078b7548e52bd2fb41e8e75dac8"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.508556 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" event={"ID":"7a4dc310-e5f8-4a6f-8c8b-94a7faca596d","Type":"ContainerStarted","Data":"56650004977d422c40aad28c1536167a6b6950ed56739c1c02d994d7c5e79d36"} Nov 28 
21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.510564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" event={"ID":"8eac7f46-0beb-4f3f-a530-2fed527b6383","Type":"ContainerStarted","Data":"0973e0998e4a4468e40e9aad886a6c51c7441af11a1ac4e9cab75b891517c2e0"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.517376 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" event={"ID":"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb","Type":"ContainerStarted","Data":"34ee4bf646660ffb6b732f922c402eaa3562859033ded857c22894c80b9f4be1"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.519196 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" event={"ID":"b8066278-4583-4fe3-aed6-93543482ab1e","Type":"ContainerStarted","Data":"fab11e12d6827dd146049fa64c40de000d67f3c2931b59b5b924f9112a315d10"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.521437 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" event={"ID":"844d1842-4247-4b95-8cca-1785d3ed80b8","Type":"ContainerStarted","Data":"d1bd77ae8695d07a6a41c6593df6dc8483d204416c3e92925e4e0e0fae086666"} Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.543397 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cfdjt" podStartSLOduration=8.809764103 podStartE2EDuration="37.543373759s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.817862391 +0000 UTC m=+1102.286510300" lastFinishedPulling="2025-11-28 21:08:11.551472027 +0000 UTC m=+1131.020119956" observedRunningTime="2025-11-28 21:08:17.53106987 +0000 UTC m=+1136.999717779" watchObservedRunningTime="2025-11-28 21:08:17.543373759 +0000 UTC m=+1137.012021658" Nov 28 21:08:17 crc kubenswrapper[4957]: I1128 21:08:17.559931 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55"] Nov 28 21:08:18 crc kubenswrapper[4957]: I1128 21:08:18.533617 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" event={"ID":"aaaab82e-6456-4b20-9d92-f19458df9948","Type":"ContainerStarted","Data":"df2487c378c3d3fd7b849665eb66b4cbb21350b52edd53dddc9271f4a19e4789"} Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.336966 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" podUID="47f33b35-a8d3-4981-8001-47b906a33fa6" Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.361752 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" podUID="a8962e83-cc90-4844-9bca-96e85cf789bd" Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.492438 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" podUID="f510519a-6187-47f8-875e-3e9a5537c364" Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.504831 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" podUID="a3a9a0f3-6f26-4174-973d-049a1b8a2573" Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.522317 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" podUID="02e155d2-76c6-4fca-b013-6c2dcf607cdb" Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.570141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" event={"ID":"15b01ca6-83c4-47da-bd82-8b5c4a177561","Type":"ContainerStarted","Data":"0c4e48cb0cb1ffa15833eed99d701ef24f4389da141875aa5468d119c7b43bac"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.571572 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" event={"ID":"47f33b35-a8d3-4981-8001-47b906a33fa6","Type":"ContainerStarted","Data":"08441a9a18ac3d98f40c205ef5e8f45d89c3357c903d24ce763c70c4dffe8e53"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.576454 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" event={"ID":"f510519a-6187-47f8-875e-3e9a5537c364","Type":"ContainerStarted","Data":"71c62a7bb5252b5bef33ebf331f12fef57a776450ecd44c61c4c179d06790680"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.579914 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" event={"ID":"a8962e83-cc90-4844-9bca-96e85cf789bd","Type":"ContainerStarted","Data":"2633dc19c7fa62baa24877048e798ba257b44c0294144782ed523f0184f77b22"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.591623 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" event={"ID":"aaaab82e-6456-4b20-9d92-f19458df9948","Type":"ContainerStarted","Data":"30c5c6c45aada3e29ae0c370130058b438f6c759ccb2db361bac225ad6efcafd"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.591773 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.606562 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" event={"ID":"499b2d8c-a27a-46f1-9f38-8b29ab905da7","Type":"ContainerStarted","Data":"39b13b93533e3a793756d00d0f18f376640e3115c67238d658a91d8e7722473a"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.607571 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 
21:08:22.612821 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" event={"ID":"a3a9a0f3-6f26-4174-973d-049a1b8a2573","Type":"ContainerStarted","Data":"e83e9c3936931406b5f1440652b672379bbf08db75ce0d28d2d7980ac5ae2bd4"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.613872 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.641008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" event={"ID":"02e155d2-76c6-4fca-b013-6c2dcf607cdb","Type":"ContainerStarted","Data":"d0dab38e123b3e060d41913c1e6e269f9ca3428db2fef3b285fadc7e766bb620"} Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.686281 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cnzv4" podStartSLOduration=3.556127814 podStartE2EDuration="42.686259377s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.80365751 +0000 UTC m=+1102.272305419" lastFinishedPulling="2025-11-28 21:08:21.933789063 +0000 UTC m=+1141.402436982" observedRunningTime="2025-11-28 21:08:22.663461994 +0000 UTC m=+1142.132109903" watchObservedRunningTime="2025-11-28 21:08:22.686259377 +0000 UTC m=+1142.154907286" Nov 28 21:08:22 crc kubenswrapper[4957]: I1128 21:08:22.710690 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" podStartSLOduration=42.710673799 podStartE2EDuration="42.710673799s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:08:22.704437978 +0000 UTC m=+1142.173085887" watchObservedRunningTime="2025-11-28 21:08:22.710673799 +0000 UTC m=+1142.179321708" Nov 28 21:08:22 crc kubenswrapper[4957]: E1128 21:08:22.826465 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" podUID="442226e4-b2b8-41c8-9278-2845b2fff0aa" Nov 28 21:08:23 crc kubenswrapper[4957]: E1128 21:08:23.024845 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" podUID="34faaa98-3568-4478-b968-b9cbe87c77f3" Nov 28 21:08:23 crc kubenswrapper[4957]: E1128 21:08:23.137495 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" podUID="c59777ed-7790-45bc-972a-f9fbe8fbccf4" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.649395 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" 
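
Every failed pull in this run carries the same CRI status, gRPC code Canceled with "copying config: context canceled": the pull's context was torn down while the image's config blob was being copied, rather than the registry refusing anything, and kubelet's retry is why the same pods log ContainerStarted shortly afterwards. A sketch of how a Go caller would tell the two cases apart, assuming the google.golang.org/grpc module (classifyPullError is a hypothetical helper, not kubelet code):

    package main

    import (
    	"context"
    	"errors"
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // classifyPullError separates a canceled pull (usually benign and retried,
    // as in the log above) from a genuine registry-side failure.
    func classifyPullError(err error) string {
    	if s, ok := status.FromError(err); ok && s.Code() == codes.Canceled {
    		return "pull canceled mid-transfer; a retry will be scheduled"
    	}
    	if errors.Is(err, context.Canceled) {
    		return "context canceled before the RPC was issued"
    	}
    	return "real pull failure: " + err.Error()
    }

    func main() {
    	// The same status the CRI runtime returned above.
    	err := status.Error(codes.Canceled, "copying config: context canceled")
    	fmt.Println(classifyPullError(err))
    }
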
event={"ID":"a3a9a0f3-6f26-4174-973d-049a1b8a2573","Type":"ContainerStarted","Data":"8e8a6c35d522ab0f0057f8144ddab77b8a24fc68e24105ff7f6a5017ba712b34"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.649468 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.652193 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" event={"ID":"15b01ca6-83c4-47da-bd82-8b5c4a177561","Type":"ContainerStarted","Data":"cbed713c0981ec0aa46f960f2b88071e92895451f0ae0a0e5602b477dbdacd0e"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.652347 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.654716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" event={"ID":"844d1842-4247-4b95-8cca-1785d3ed80b8","Type":"ContainerStarted","Data":"1542e30b343a25ecdd21fdec8a718c691a764e5e93dd32dfc730380c22b21d67"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.654928 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.656488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" event={"ID":"1a5138b3-6b84-43b0-bdc9-f867a83f4bc7","Type":"ContainerStarted","Data":"716870ed6ee7b1fbc77b3e58ea19d24a9f7180812e68137deffa0ac2b3fb8f5d"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.656649 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.657671 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" event={"ID":"c59777ed-7790-45bc-972a-f9fbe8fbccf4","Type":"ContainerStarted","Data":"8ac562f904b969e8816016193b2d212ea0471ec99340765a39257f307fce1622"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.658674 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.658704 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.669820 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" event={"ID":"8eac7f46-0beb-4f3f-a530-2fed527b6383","Type":"ContainerStarted","Data":"c13ba266b0a1984f6b302397bb33867d09281c389a79ad6d27c767805d2d8533"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.670464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.673280 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.686598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" event={"ID":"d50c67da-27ca-4ab9-bf83-b2275ff3d801","Type":"ContainerStarted","Data":"0898ddc095f1ac2db8fda13318fee5c46a151790a7424a5bf3c3d2ff1bd7189a"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.686848 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.689940 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" podStartSLOduration=3.598085612 podStartE2EDuration="43.689924014s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.805647629 +0000 UTC m=+1102.274295538" lastFinishedPulling="2025-11-28 21:08:22.897486031 +0000 UTC m=+1142.366133940" observedRunningTime="2025-11-28 21:08:23.678691062 +0000 UTC m=+1143.147338971" watchObservedRunningTime="2025-11-28 21:08:23.689924014 +0000 UTC m=+1143.158571913" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.690155 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.690735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" event={"ID":"554f334d-cef4-48f9-bb57-03261844fbde","Type":"ContainerStarted","Data":"d66be5fcb20c8e03488a9eed94fe8cd90339d079616ed817b83205bcc5ca27ca"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.691480 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.694860 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" event={"ID":"12484928-2fe4-4bd6-bac2-e0f2e48829fe","Type":"ContainerStarted","Data":"4a5cb2eaa2d10351e9774c9542911733080131ab58d7adff493d3fefce009c44"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.695585 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.696746 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.699729 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.703557 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" event={"ID":"34faaa98-3568-4478-b968-b9cbe87c77f3","Type":"ContainerStarted","Data":"bb615059a0cce27c3b49f867c2ecc309db19daa84c21d3946241462c0881b666"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.732029 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" event={"ID":"96a751a3-4af7-4cb8-b12b-46e0d177b6f3","Type":"ContainerStarted","Data":"f5f5e6b11419f8fc623bb79c9fc0922798acbae9de21472edfbe172ce4dced39"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.732086 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.744515 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" event={"ID":"c330a33e-ec13-4ec0-869b-4847b9385d5d","Type":"ContainerStarted","Data":"1c92306577ba0458826e58b4a8e8fab41ebb49016f66dd56d26afdc7d0f4fd29"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.745195 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.752668 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.754021 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" event={"ID":"442226e4-b2b8-41c8-9278-2845b2fff0aa","Type":"ContainerStarted","Data":"01c28e5381aa6fecba016d0f33f7519965b9a06f55ee033018d2cca0a67f8920"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.766923 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" event={"ID":"c2cca951-4ada-44ec-ab43-a1f69ee7f7cb","Type":"ContainerStarted","Data":"a84009704ce8470ef233f0cbb54348657548271166f773f3e62d51c65a52b1cb"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.767913 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.768416 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" podStartSLOduration=38.323614647 podStartE2EDuration="43.768402798s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:08:16.370227761 +0000 UTC m=+1135.838875670" lastFinishedPulling="2025-11-28 21:08:21.815015922 +0000 UTC m=+1141.283663821" observedRunningTime="2025-11-28 21:08:23.766578304 +0000 UTC m=+1143.235226213" watchObservedRunningTime="2025-11-28 21:08:23.768402798 +0000 UTC m=+1143.237050707" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.771015 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" event={"ID":"b8066278-4583-4fe3-aed6-93543482ab1e","Type":"ContainerStarted","Data":"2f478e07ea3e6971ce45c303c51ddfdf4c1373a10fd8873ad56cca755b3d5360"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.771275 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.781512 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.788512 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.792380 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" event={"ID":"3d6f1d41-eaa5-4258-906c-5894ac698e5b","Type":"ContainerStarted","Data":"de17aed60a61c3e61044f5a2dcb02407a671a3a649732f64a1cdc614afad1046"} Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.809897 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-npt5l" podStartSLOduration=4.699971142 podStartE2EDuration="43.809881984s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.804244445 +0000 UTC m=+1102.272892354" lastFinishedPulling="2025-11-28 21:08:21.914155287 +0000 UTC m=+1141.382803196" observedRunningTime="2025-11-28 21:08:23.78866445 +0000 UTC m=+1143.257312359" watchObservedRunningTime="2025-11-28 21:08:23.809881984 +0000 UTC m=+1143.278529893" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.841604 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xqjj5" podStartSLOduration=4.933543636 podStartE2EDuration="44.841587324s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.006811826 +0000 UTC m=+1101.475459735" lastFinishedPulling="2025-11-28 21:08:21.914855514 +0000 UTC m=+1141.383503423" observedRunningTime="2025-11-28 21:08:23.818615846 +0000 UTC m=+1143.287263755" watchObservedRunningTime="2025-11-28 21:08:23.841587324 +0000 UTC m=+1143.310235233" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.851099 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-s85m7" podStartSLOduration=4.479205233 podStartE2EDuration="44.851080604s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:41.542652555 +0000 UTC m=+1101.011300464" lastFinishedPulling="2025-11-28 21:08:21.914527926 +0000 UTC m=+1141.383175835" observedRunningTime="2025-11-28 21:08:23.837057614 +0000 UTC m=+1143.305705523" watchObservedRunningTime="2025-11-28 21:08:23.851080604 +0000 UTC m=+1143.319728513" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.900742 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-sfgm2" podStartSLOduration=4.252897683 podStartE2EDuration="44.900723258s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:41.358455087 +0000 UTC m=+1100.827102996" lastFinishedPulling="2025-11-28 21:08:22.006280672 +0000 UTC m=+1141.474928571" observedRunningTime="2025-11-28 21:08:23.876688255 +0000 UTC m=+1143.345336164" watchObservedRunningTime="2025-11-28 21:08:23.900723258 +0000 UTC m=+1143.369371157" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.907444 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" podStartSLOduration=4.787379809 
podStartE2EDuration="43.907427761s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.817736558 +0000 UTC m=+1102.286384467" lastFinishedPulling="2025-11-28 21:08:21.93778451 +0000 UTC m=+1141.406432419" observedRunningTime="2025-11-28 21:08:23.900877382 +0000 UTC m=+1143.369525291" watchObservedRunningTime="2025-11-28 21:08:23.907427761 +0000 UTC m=+1143.376075670" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.950472 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v6427" podStartSLOduration=5.016091224 podStartE2EDuration="44.950458585s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:41.997595018 +0000 UTC m=+1101.466242927" lastFinishedPulling="2025-11-28 21:08:21.931962379 +0000 UTC m=+1141.400610288" observedRunningTime="2025-11-28 21:08:23.949415319 +0000 UTC m=+1143.418063228" watchObservedRunningTime="2025-11-28 21:08:23.950458585 +0000 UTC m=+1143.419106494" Nov 28 21:08:23 crc kubenswrapper[4957]: I1128 21:08:23.952123 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-8t4fj" podStartSLOduration=5.042008085 podStartE2EDuration="44.952117635s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.003393181 +0000 UTC m=+1101.472041090" lastFinishedPulling="2025-11-28 21:08:21.913502741 +0000 UTC m=+1141.382150640" observedRunningTime="2025-11-28 21:08:23.924830953 +0000 UTC m=+1143.393478862" watchObservedRunningTime="2025-11-28 21:08:23.952117635 +0000 UTC m=+1143.420765544" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.024512 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" podStartSLOduration=34.196678636 podStartE2EDuration="44.024494611s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:08:11.961265868 +0000 UTC m=+1131.429913777" lastFinishedPulling="2025-11-28 21:08:21.789081843 +0000 UTC m=+1141.257729752" observedRunningTime="2025-11-28 21:08:23.987482613 +0000 UTC m=+1143.456130522" watchObservedRunningTime="2025-11-28 21:08:24.024494611 +0000 UTC m=+1143.493142520" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.054724 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-8wqx7" podStartSLOduration=4.340776719 podStartE2EDuration="44.054708554s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.201022111 +0000 UTC m=+1101.669670020" lastFinishedPulling="2025-11-28 21:08:21.914953946 +0000 UTC m=+1141.383601855" observedRunningTime="2025-11-28 21:08:24.045366377 +0000 UTC m=+1143.514014276" watchObservedRunningTime="2025-11-28 21:08:24.054708554 +0000 UTC m=+1143.523356463" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.099129 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nq5h8" podStartSLOduration=4.903044173 podStartE2EDuration="44.099101801s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.814417436 +0000 UTC m=+1102.283065355" lastFinishedPulling="2025-11-28 21:08:22.010475074 +0000 UTC m=+1141.479122983" 
observedRunningTime="2025-11-28 21:08:24.088488973 +0000 UTC m=+1143.557136882" watchObservedRunningTime="2025-11-28 21:08:24.099101801 +0000 UTC m=+1143.567749710" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.124424 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-v56f9" podStartSLOduration=4.910192124 podStartE2EDuration="44.124410024s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.717745429 +0000 UTC m=+1102.186393338" lastFinishedPulling="2025-11-28 21:08:21.931963329 +0000 UTC m=+1141.400611238" observedRunningTime="2025-11-28 21:08:24.118776138 +0000 UTC m=+1143.587424047" watchObservedRunningTime="2025-11-28 21:08:24.124410024 +0000 UTC m=+1143.593057933" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.802919 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" event={"ID":"34faaa98-3568-4478-b968-b9cbe87c77f3","Type":"ContainerStarted","Data":"a3bdc5728eddc0fa930deb3a5329a79e0569ec1f2d9e5042f0dafb0a929f5975"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.803061 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.804647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" event={"ID":"c59777ed-7790-45bc-972a-f9fbe8fbccf4","Type":"ContainerStarted","Data":"f485d532e9736514535d1c7ebc20b82d3e5ba436a7c038d8f555cae0b09a470e"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.804803 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.806549 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" event={"ID":"96a751a3-4af7-4cb8-b12b-46e0d177b6f3","Type":"ContainerStarted","Data":"139cf81eca50d355d8585e63f2bb621e17385dc6de91a0be39c5bd1fbfd25bf1"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.808107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" event={"ID":"02e155d2-76c6-4fca-b013-6c2dcf607cdb","Type":"ContainerStarted","Data":"70fa181f166f4bed07b61545965b274b439519cc28c916e81cdb85ec129abfa0"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.808701 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.810601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" event={"ID":"442226e4-b2b8-41c8-9278-2845b2fff0aa","Type":"ContainerStarted","Data":"5e2e0908b5a2b81ac14ad11373587f5289b43e8a9ddc2653dda6c099a36b96a9"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.810679 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840682 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840723 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840739 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" event={"ID":"47f33b35-a8d3-4981-8001-47b906a33fa6","Type":"ContainerStarted","Data":"55efe7ada79469a94ac2b1e9fbfde4ac163f0b72604748b314be26f27292104b"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840771 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" event={"ID":"f510519a-6187-47f8-875e-3e9a5537c364","Type":"ContainerStarted","Data":"315c836c05e8b3c81db561b48d473905f11da481ce3e9a0ef437a120849e12d0"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840852 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-qshzq" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.840864 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" event={"ID":"a8962e83-cc90-4844-9bca-96e85cf789bd","Type":"ContainerStarted","Data":"c49707af7ebf5478174e1460717ae21d2f5eec4cd67b01514480b65e0bd06192"} Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.859018 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" podStartSLOduration=3.329473606 podStartE2EDuration="44.858959213s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.803518587 +0000 UTC m=+1102.272166496" lastFinishedPulling="2025-11-28 21:08:24.333004194 +0000 UTC m=+1143.801652103" observedRunningTime="2025-11-28 21:08:24.824404125 +0000 UTC m=+1144.293052034" watchObservedRunningTime="2025-11-28 21:08:24.858959213 +0000 UTC m=+1144.327607122" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.870867 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" podStartSLOduration=2.759093244 podStartE2EDuration="44.870851082s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.005707668 +0000 UTC m=+1101.474355577" lastFinishedPulling="2025-11-28 21:08:24.117465506 +0000 UTC m=+1143.586113415" observedRunningTime="2025-11-28 21:08:24.853321446 +0000 UTC m=+1144.321969355" watchObservedRunningTime="2025-11-28 21:08:24.870851082 +0000 UTC m=+1144.339498991" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.884044 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" podStartSLOduration=4.493314681 podStartE2EDuration="44.884026191s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.810732605 +0000 UTC m=+1102.279380514" lastFinishedPulling="2025-11-28 21:08:23.201444115 +0000 
UTC m=+1142.670092024" observedRunningTime="2025-11-28 21:08:24.865980973 +0000 UTC m=+1144.334628882" watchObservedRunningTime="2025-11-28 21:08:24.884026191 +0000 UTC m=+1144.352674100" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.890084 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" podStartSLOduration=4.465313196 podStartE2EDuration="44.890074498s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.772182333 +0000 UTC m=+1102.240830242" lastFinishedPulling="2025-11-28 21:08:23.196943635 +0000 UTC m=+1142.665591544" observedRunningTime="2025-11-28 21:08:24.882063524 +0000 UTC m=+1144.350711573" watchObservedRunningTime="2025-11-28 21:08:24.890074498 +0000 UTC m=+1144.358722407" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.910944 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" podStartSLOduration=2.861475426 podStartE2EDuration="45.910926194s" podCreationTimestamp="2025-11-28 21:07:39 +0000 UTC" firstStartedPulling="2025-11-28 21:07:41.324651693 +0000 UTC m=+1100.793299602" lastFinishedPulling="2025-11-28 21:08:24.374102461 +0000 UTC m=+1143.842750370" observedRunningTime="2025-11-28 21:08:24.905615145 +0000 UTC m=+1144.374263054" watchObservedRunningTime="2025-11-28 21:08:24.910926194 +0000 UTC m=+1144.379574103" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.929160 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" podStartSLOduration=4.387643965 podStartE2EDuration="44.929143116s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.803828194 +0000 UTC m=+1102.272476103" lastFinishedPulling="2025-11-28 21:08:23.345327345 +0000 UTC m=+1142.813975254" observedRunningTime="2025-11-28 21:08:24.922586637 +0000 UTC m=+1144.391234546" watchObservedRunningTime="2025-11-28 21:08:24.929143116 +0000 UTC m=+1144.397791025" Nov 28 21:08:24 crc kubenswrapper[4957]: I1128 21:08:24.940887 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" podStartSLOduration=4.341302271 podStartE2EDuration="44.94087326s" podCreationTimestamp="2025-11-28 21:07:40 +0000 UTC" firstStartedPulling="2025-11-28 21:07:42.757798828 +0000 UTC m=+1102.226446737" lastFinishedPulling="2025-11-28 21:08:23.357369817 +0000 UTC m=+1142.826017726" observedRunningTime="2025-11-28 21:08:24.939846855 +0000 UTC m=+1144.408494764" watchObservedRunningTime="2025-11-28 21:08:24.94087326 +0000 UTC m=+1144.409521169" Nov 28 21:08:25 crc kubenswrapper[4957]: I1128 21:08:25.823391 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.221811 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kbhl9" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.656404 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-47tjl" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.659789 4957 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-d6twj" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.691930 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-bn4dd" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.708179 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ln4j9" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.741141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-2n5cx" Nov 28 21:08:30 crc kubenswrapper[4957]: I1128 21:08:30.836028 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-2w9h7" Nov 28 21:08:31 crc kubenswrapper[4957]: I1128 21:08:31.045916 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f6754bd54-dbj68" Nov 28 21:08:32 crc kubenswrapper[4957]: I1128 21:08:32.995466 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fb8944fcb-x9n55" Nov 28 21:08:36 crc kubenswrapper[4957]: I1128 21:08:36.243885 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-ccmt8" Nov 28 21:08:36 crc kubenswrapper[4957]: I1128 21:08:36.562317 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.120594 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"] Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.131311 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.134821 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.135137 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-t2vh8" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.135160 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.135299 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.138668 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"] Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.163419 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.163541 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52zq\" (UniqueName: \"kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.181576 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"] Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.183086 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.185549 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.192827 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"] Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.265334 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52zq\" (UniqueName: \"kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.265954 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.266063 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4fjl\" (UniqueName: \"kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.266337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.266369 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.267107 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.292211 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52zq\" (UniqueName: \"kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq\") pod \"dnsmasq-dns-675f4bcbfc-65gdn\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.367332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 
21:08:54.367421 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4fjl\" (UniqueName: \"kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.367458 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.368841 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.369482 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.386352 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4fjl\" (UniqueName: \"kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl\") pod \"dnsmasq-dns-78dd6ddcc-vmccd\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.458454 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.500144 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:08:54 crc kubenswrapper[4957]: I1128 21:08:54.937471 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"] Nov 28 21:08:55 crc kubenswrapper[4957]: W1128 21:08:55.009416 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0305702e_2536_44cf_bc5d_fc3bea847db0.slice/crio-cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266 WatchSource:0}: Error finding container cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266: Status 404 returned error can't find the container with id cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266 Nov 28 21:08:55 crc kubenswrapper[4957]: I1128 21:08:55.010509 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"] Nov 28 21:08:55 crc kubenswrapper[4957]: I1128 21:08:55.240747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" event={"ID":"0305702e-2536-44cf-bc5d-fc3bea847db0","Type":"ContainerStarted","Data":"cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266"} Nov 28 21:08:55 crc kubenswrapper[4957]: I1128 21:08:55.242647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" event={"ID":"87d56d73-d2b5-4eb2-9f26-c52642b5000b","Type":"ContainerStarted","Data":"a9e46c438740ea9adefa0814ea1fea2e31abaa91e2ccc497e073c659ca990565"} Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.028509 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.065479 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.067013 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.110433 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.121095 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7f4m\" (UniqueName: \"kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.121171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.121385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.222645 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7f4m\" (UniqueName: \"kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.222758 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.222796 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.223637 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.223800 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.244566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7f4m\" (UniqueName: 
\"kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m\") pod \"dnsmasq-dns-666b6646f7-99cfd\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") " pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.354676 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.403788 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.405414 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.421122 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.427355 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.427452 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wc7d\" (UniqueName: \"kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.427521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.434057 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.530807 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.531324 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wc7d\" (UniqueName: \"kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.531678 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.532370 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.532547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.550670 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wc7d\" (UniqueName: \"kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d\") pod \"dnsmasq-dns-57d769cc4f-qwwv6\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.732780 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:08:57 crc kubenswrapper[4957]: I1128 21:08:57.983251 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"] Nov 28 21:08:58 crc kubenswrapper[4957]: W1128 21:08:58.001292 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod901e1c94_d356_4ed1_998a_a7910a1e0510.slice/crio-a1b4c5550818481a24d12d75afbf855b29e8d32c916ba1b660ff7a638f41ac14 WatchSource:0}: Error finding container a1b4c5550818481a24d12d75afbf855b29e8d32c916ba1b660ff7a638f41ac14: Status 404 returned error can't find the container with id a1b4c5550818481a24d12d75afbf855b29e8d32c916ba1b660ff7a638f41ac14 Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.210888 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.214580 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.218623 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.218703 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.218987 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.219059 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2szl" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.219003 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.219233 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.219411 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.232372 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.292598 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" event={"ID":"901e1c94-d356-4ed1-998a-a7910a1e0510","Type":"ContainerStarted","Data":"a1b4c5550818481a24d12d75afbf855b29e8d32c916ba1b660ff7a638f41ac14"} Nov 28 21:08:58 crc kubenswrapper[4957]: W1128 21:08:58.301009 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b34f895_848b_4d42_bacc_04dd981362c9.slice/crio-bc575d718860854d1fd05102374791d0f72f462938f5a5bbbb0540fd9e7eb774 WatchSource:0}: Error finding container bc575d718860854d1fd05102374791d0f72f462938f5a5bbbb0540fd9e7eb774: Status 404 returned error can't find the container with id bc575d718860854d1fd05102374791d0f72f462938f5a5bbbb0540fd9e7eb774 Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.308069 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.345747 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.345890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.345914 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " 
pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.345949 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346000 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2h5\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346061 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346137 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346160 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346194 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.346349 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2h5\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 
21:08:58.448454 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448519 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448566 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448618 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.448892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.449079 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.449128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.449177 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.450033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.450298 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.450321 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.450385 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.451648 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.451938 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.456131 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.460017 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.460419 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.470746 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.470988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2h5\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.493030 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.515718 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.519360 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.524604 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cr7k9" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.524815 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.525006 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.525119 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.526108 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.526264 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.526414 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.530167 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.594452 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.657949 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658121 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658191 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658273 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zbnd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658353 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658407 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658426 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.658512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.760392 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.760499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.760521 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.762949 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763038 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763072 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763090 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zbnd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc 
kubenswrapper[4957]: I1128 21:08:58.763139 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763200 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.763868 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.764535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.764732 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.768679 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.769022 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.769082 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.786457 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.786835 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.787796 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zbnd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.794054 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:58 crc kubenswrapper[4957]: I1128 21:08:58.872409 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.038094 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.301784 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" event={"ID":"1b34f895-848b-4d42-bacc-04dd981362c9","Type":"ContainerStarted","Data":"bc575d718860854d1fd05102374791d0f72f462938f5a5bbbb0540fd9e7eb774"} Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.303089 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerStarted","Data":"711e0236c3e879c1850a2f9915f932f923f54bf8a0dd71589341ae8c4e4f5c2e"} Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.361940 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.877185 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.878689 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.880598 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.881587 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.882770 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ctl4c" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.883097 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.889367 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.890167 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992156 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992209 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992243 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 
21:08:59.992280 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kolla-config\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992329 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-default\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992359 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992390 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sbg\" (UniqueName: \"kubernetes.io/projected/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kube-api-access-q6sbg\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:08:59 crc kubenswrapper[4957]: I1128 21:08:59.992463 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094124 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-default\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094183 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094243 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sbg\" (UniqueName: \"kubernetes.io/projected/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kube-api-access-q6sbg\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094327 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094396 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094420 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094437 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.094548 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.096273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.096316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kolla-config\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.096900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kolla-config\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.097246 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.105736 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.107014 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d97270c0-f75e-4695-87b5-2c7cfd08bf02-config-data-default\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" 
Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.118268 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sbg\" (UniqueName: \"kubernetes.io/projected/d97270c0-f75e-4695-87b5-2c7cfd08bf02-kube-api-access-q6sbg\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.119077 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97270c0-f75e-4695-87b5-2c7cfd08bf02-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.120072 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d97270c0-f75e-4695-87b5-2c7cfd08bf02\") " pod="openstack/openstack-galera-0" Nov 28 21:09:00 crc kubenswrapper[4957]: I1128 21:09:00.211108 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.493916 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.495847 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.502477 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.550509 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.550564 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.550731 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wc4pv" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.550797 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643733 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643784 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643860 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643894 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643923 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643961 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmwd\" (UniqueName: \"kubernetes.io/projected/9b91aacb-b300-41de-814e-26e73ac93c2e-kube-api-access-ggmwd\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.643982 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.644016 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.677650 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.678816 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.681238 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.682521 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.686804 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pqhsm" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.699930 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745143 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745211 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmwd\" (UniqueName: \"kubernetes.io/projected/9b91aacb-b300-41de-814e-26e73ac93c2e-kube-api-access-ggmwd\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745254 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745329 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745350 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.745434 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.746571 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.746916 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.747077 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.747800 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.747955 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b91aacb-b300-41de-814e-26e73ac93c2e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.761368 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.765077 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b91aacb-b300-41de-814e-26e73ac93c2e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.767372 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmwd\" (UniqueName: \"kubernetes.io/projected/9b91aacb-b300-41de-814e-26e73ac93c2e-kube-api-access-ggmwd\") pod \"openstack-cell1-galera-0\" (UID: \"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.773636 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"9b91aacb-b300-41de-814e-26e73ac93c2e\") " pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.846606 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.846706 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.846876 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-config-data\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.846958 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-kolla-config\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.847251 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrv7\" (UniqueName: \"kubernetes.io/projected/179be6ed-b240-4fde-995c-92c72dbd2b02-kube-api-access-qnrv7\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.881117 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.949319 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrv7\" (UniqueName: \"kubernetes.io/projected/179be6ed-b240-4fde-995c-92c72dbd2b02-kube-api-access-qnrv7\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.949440 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.949521 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.949551 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-config-data\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.949583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-kolla-config\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.950605 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-kolla-config\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.950842 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/179be6ed-b240-4fde-995c-92c72dbd2b02-config-data\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.953195 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-memcached-tls-certs\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.955760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179be6ed-b240-4fde-995c-92c72dbd2b02-combined-ca-bundle\") pod \"memcached-0\" (UID: \"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:01 crc kubenswrapper[4957]: I1128 21:09:01.970160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrv7\" (UniqueName: \"kubernetes.io/projected/179be6ed-b240-4fde-995c-92c72dbd2b02-kube-api-access-qnrv7\") pod \"memcached-0\" (UID: 
\"179be6ed-b240-4fde-995c-92c72dbd2b02\") " pod="openstack/memcached-0" Nov 28 21:09:02 crc kubenswrapper[4957]: I1128 21:09:02.003339 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 21:09:02 crc kubenswrapper[4957]: W1128 21:09:02.066486 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752b9e43_44cd_4526_8393_6ae735497707.slice/crio-aff117b965566ad9b7954c5c15b9c8d59efdc1a05429b4c0c417a658edbbba14 WatchSource:0}: Error finding container aff117b965566ad9b7954c5c15b9c8d59efdc1a05429b4c0c417a658edbbba14: Status 404 returned error can't find the container with id aff117b965566ad9b7954c5c15b9c8d59efdc1a05429b4c0c417a658edbbba14 Nov 28 21:09:02 crc kubenswrapper[4957]: I1128 21:09:02.331656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerStarted","Data":"aff117b965566ad9b7954c5c15b9c8d59efdc1a05429b4c0c417a658edbbba14"} Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.108800 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.110286 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.114901 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bt5c5" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.145383 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.216381 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btq74\" (UniqueName: \"kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74\") pod \"kube-state-metrics-0\" (UID: \"12d0ff1a-6220-432a-bc8a-f611c2e6996d\") " pod="openstack/kube-state-metrics-0" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.318166 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btq74\" (UniqueName: \"kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74\") pod \"kube-state-metrics-0\" (UID: \"12d0ff1a-6220-432a-bc8a-f611c2e6996d\") " pod="openstack/kube-state-metrics-0" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.343162 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btq74\" (UniqueName: \"kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74\") pod \"kube-state-metrics-0\" (UID: \"12d0ff1a-6220-432a-bc8a-f611c2e6996d\") " pod="openstack/kube-state-metrics-0" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.443293 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.696991 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d"] Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.698150 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.703124 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.703887 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-zv5wc" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.720042 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d"] Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.828731 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2w6\" (UniqueName: \"kubernetes.io/projected/72d57747-268e-40db-85cc-98d5ed48a55f-kube-api-access-dk2w6\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.829054 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d57747-268e-40db-85cc-98d5ed48a55f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.931559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk2w6\" (UniqueName: \"kubernetes.io/projected/72d57747-268e-40db-85cc-98d5ed48a55f-kube-api-access-dk2w6\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.931682 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d57747-268e-40db-85cc-98d5ed48a55f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.972125 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2w6\" (UniqueName: \"kubernetes.io/projected/72d57747-268e-40db-85cc-98d5ed48a55f-kube-api-access-dk2w6\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:04 crc kubenswrapper[4957]: I1128 21:09:04.976957 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d57747-268e-40db-85cc-98d5ed48a55f-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-bhd2d\" (UID: \"72d57747-268e-40db-85cc-98d5ed48a55f\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.035884 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.086976 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b7bbc7d5d-rczg7"] Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.089721 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.140276 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b7bbc7d5d-rczg7"] Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245231 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-trusted-ca-bundle\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-service-ca\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245301 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6v8\" (UniqueName: \"kubernetes.io/projected/c1993c43-4f51-4d8d-8e16-394099cc5b15-kube-api-access-vb6v8\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245366 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-oauth-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245399 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-oauth-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245424 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.245449 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: 
I1128 21:09:05.327716 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.344156 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.344273 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348268 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-oauth-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348329 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-oauth-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348388 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348456 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-trusted-ca-bundle\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348474 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-service-ca\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.348498 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6v8\" (UniqueName: \"kubernetes.io/projected/c1993c43-4f51-4d8d-8e16-394099cc5b15-kube-api-access-vb6v8\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.352251 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-oauth-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: 
\"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.352843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.353566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-service-ca\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.354615 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1993c43-4f51-4d8d-8e16-394099cc5b15-trusted-ca-bundle\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.355008 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-serving-cert\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.355023 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c1993c43-4f51-4d8d-8e16-394099cc5b15-console-oauth-config\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.355241 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.355614 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.355726 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.381502 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.381583 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vg8pc" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.384404 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.403266 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6v8\" (UniqueName: \"kubernetes.io/projected/c1993c43-4f51-4d8d-8e16-394099cc5b15-kube-api-access-vb6v8\") pod \"console-7b7bbc7d5d-rczg7\" (UID: \"c1993c43-4f51-4d8d-8e16-394099cc5b15\") " pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc 
kubenswrapper[4957]: I1128 21:09:05.416532 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452538 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452587 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452616 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452671 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.452750 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tss9v\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554107 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554173 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tss9v\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554366 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.554406 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.555508 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.556289 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.557646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.559810 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.559857 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.559999 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.561202 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.576909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tss9v\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.586402 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:05 crc kubenswrapper[4957]: I1128 21:09:05.770529 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.518988 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dzt8d"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.520875 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.524127 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-cncxx" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.524318 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.524319 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.544912 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cd25j"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.546919 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.565096 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dzt8d"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.579568 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cd25j"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606106 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-log-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-combined-ca-bundle\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-ovn-controller-tls-certs\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606287 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606316 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-lib\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606359 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " 
pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606379 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-scripts\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606472 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvsx\" (UniqueName: \"kubernetes.io/projected/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-kube-api-access-5bvsx\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606563 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-etc-ovs\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfxj\" (UniqueName: \"kubernetes.io/projected/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-kube-api-access-vgfxj\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606703 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-run\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606719 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-scripts\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.606754 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-log\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.640845 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.642951 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.644769 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.645891 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.646154 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.646772 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p9tbz" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.646169 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.651430 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708520 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-scripts\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708550 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvsx\" (UniqueName: \"kubernetes.io/projected/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-kube-api-access-5bvsx\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-etc-ovs\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708688 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708715 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708739 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgfxj\" (UniqueName: \"kubernetes.io/projected/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-kube-api-access-vgfxj\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-run\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.708809 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-scripts\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709062 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-log\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709111 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-log-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709195 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sp6\" (UniqueName: \"kubernetes.io/projected/3d137d00-b823-4d67-a158-71e84c6d2c6b-kube-api-access-49sp6\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709230 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-combined-ca-bundle\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709249 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " 
pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709291 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-ovn-controller-tls-certs\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709319 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709401 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-run\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709603 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-etc-ovs\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709767 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-log-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.709897 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-log\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.711272 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.711322 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-lib\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.711360 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.711463 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.711712 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-var-lib\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.713731 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-scripts\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.714679 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-scripts\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.715847 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-ovn-controller-tls-certs\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.716766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-var-run-ovn\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.724563 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-combined-ca-bundle\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.724725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvsx\" (UniqueName: \"kubernetes.io/projected/6bc9960f-fdff-42fa-8cdd-4ec0d88f359d-kube-api-access-5bvsx\") pod \"ovn-controller-dzt8d\" (UID: \"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d\") " pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.725344 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgfxj\" (UniqueName: \"kubernetes.io/projected/8edb774a-3d8c-4b9f-b9ca-febeb68d14bf-kube-api-access-vgfxj\") pod \"ovn-controller-ovs-cd25j\" (UID: \"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf\") " pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.813751 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc 
kubenswrapper[4957]: I1128 21:09:07.814318 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.814375 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.814460 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.815428 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.815668 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49sp6\" (UniqueName: \"kubernetes.io/projected/3d137d00-b823-4d67-a158-71e84c6d2c6b-kube-api-access-49sp6\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.815785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.816018 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.816454 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.817061 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.818066 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-config\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " 
pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.816473 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d137d00-b823-4d67-a158-71e84c6d2c6b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.818088 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.821368 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.831450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d137d00-b823-4d67-a158-71e84c6d2c6b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.833984 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sp6\" (UniqueName: \"kubernetes.io/projected/3d137d00-b823-4d67-a158-71e84c6d2c6b-kube-api-access-49sp6\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.846771 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3d137d00-b823-4d67-a158-71e84c6d2c6b\") " pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.855155 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.869241 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:07 crc kubenswrapper[4957]: I1128 21:09:07.975884 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.709437 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.712075 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.713982 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.714167 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.714179 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.715844 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7r8tp" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.718561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779562 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779617 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779680 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779739 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6xd2\" (UniqueName: \"kubernetes.io/projected/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-kube-api-access-h6xd2\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.779837 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " 
pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.780323 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.881590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.881639 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.881678 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.881962 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.882034 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.882055 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.882074 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.882612 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.882681 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6xd2\" (UniqueName: 
\"kubernetes.io/projected/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-kube-api-access-h6xd2\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.883119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.883452 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-config\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.888539 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.890020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.892458 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.896182 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.905973 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6xd2\" (UniqueName: \"kubernetes.io/projected/4f513a2d-d752-44ee-b02c-e7f3dcb3945d-kube-api-access-h6xd2\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:10 crc kubenswrapper[4957]: I1128 21:09:10.910368 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4f513a2d-d752-44ee-b02c-e7f3dcb3945d\") " pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:11 crc kubenswrapper[4957]: I1128 21:09:11.086569 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.073353 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.073869 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l52zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-65gdn_openstack(87d56d73-d2b5-4eb2-9f26-c52642b5000b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.075012 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" podUID="87d56d73-d2b5-4eb2-9f26-c52642b5000b" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.160673 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.166313 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4fjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vmccd_openstack(0305702e-2536-44cf-bc5d-fc3bea847db0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:09:13 crc kubenswrapper[4957]: E1128 21:09:13.167486 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" podUID="0305702e-2536-44cf-bc5d-fc3bea847db0" Nov 28 21:09:13 crc kubenswrapper[4957]: I1128 21:09:13.585228 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 21:09:14 crc kubenswrapper[4957]: W1128 21:09:14.376405 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97270c0_f75e_4695_87b5_2c7cfd08bf02.slice/crio-d803fb4cdd1a402b54b549dce62d95576342d6971cfd2d3e2389bd20a4c56c4f WatchSource:0}: Error finding container d803fb4cdd1a402b54b549dce62d95576342d6971cfd2d3e2389bd20a4c56c4f: Status 404 returned error can't find the container with id d803fb4cdd1a402b54b549dce62d95576342d6971cfd2d3e2389bd20a4c56c4f Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.458232 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d97270c0-f75e-4695-87b5-2c7cfd08bf02","Type":"ContainerStarted","Data":"d803fb4cdd1a402b54b549dce62d95576342d6971cfd2d3e2389bd20a4c56c4f"} Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.464205 4957 kubelet.go:2453] "SyncLoop (PLEG): 
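Both dnsmasq-dns pods fail identically here: the CRI pull of openstack-neutron-server is cancelled mid-copy ("copying config: context canceled"), kuberuntime_manager.go dumps the entire init-container spec as an Unhandled Error, and pod_workers.go settles on ErrImagePull for the pod. The same condition is visible in the pods' status without grepping the journal; a hedged client-go sketch (the kubeconfig path and the hard-coded namespace are assumptions, not from the log):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    pods, err := cs.CoreV1().Pods("openstack").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        // Init containers that never started sit in a Waiting state whose
        // Reason mirrors the kubelet errors above.
        for _, s := range p.Status.InitContainerStatuses {
            if w := s.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
                fmt.Printf("%s init=%s %s: %s\n", p.Name, s.Name, w.Reason, w.Message)
            }
        }
    }
}

A transient ErrImagePull decays into ImagePullBackOff on retry, hence the sketch matches both reasons.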
event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" event={"ID":"0305702e-2536-44cf-bc5d-fc3bea847db0","Type":"ContainerDied","Data":"cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266"} Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.464276 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba4f836370a12403fa8571bc32fc2712abe29b3226a1a906759e7cd01883266" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.491085 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" event={"ID":"87d56d73-d2b5-4eb2-9f26-c52642b5000b","Type":"ContainerDied","Data":"a9e46c438740ea9adefa0814ea1fea2e31abaa91e2ccc497e073c659ca990565"} Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.491118 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e46c438740ea9adefa0814ea1fea2e31abaa91e2ccc497e073c659ca990565" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.662243 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.759358 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.771495 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l52zq\" (UniqueName: \"kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq\") pod \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.771687 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config\") pod \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\" (UID: \"87d56d73-d2b5-4eb2-9f26-c52642b5000b\") " Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.771721 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config\") pod \"0305702e-2536-44cf-bc5d-fc3bea847db0\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.771858 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc\") pod \"0305702e-2536-44cf-bc5d-fc3bea847db0\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.771927 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4fjl\" (UniqueName: \"kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl\") pod \"0305702e-2536-44cf-bc5d-fc3bea847db0\" (UID: \"0305702e-2536-44cf-bc5d-fc3bea847db0\") " Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.773068 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config" (OuterVolumeSpecName: "config") pod "87d56d73-d2b5-4eb2-9f26-c52642b5000b" (UID: "87d56d73-d2b5-4eb2-9f26-c52642b5000b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.773535 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config" (OuterVolumeSpecName: "config") pod "0305702e-2536-44cf-bc5d-fc3bea847db0" (UID: "0305702e-2536-44cf-bc5d-fc3bea847db0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.773773 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0305702e-2536-44cf-bc5d-fc3bea847db0" (UID: "0305702e-2536-44cf-bc5d-fc3bea847db0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.778189 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl" (OuterVolumeSpecName: "kube-api-access-p4fjl") pod "0305702e-2536-44cf-bc5d-fc3bea847db0" (UID: "0305702e-2536-44cf-bc5d-fc3bea847db0"). InnerVolumeSpecName "kube-api-access-p4fjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.778882 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq" (OuterVolumeSpecName: "kube-api-access-l52zq") pod "87d56d73-d2b5-4eb2-9f26-c52642b5000b" (UID: "87d56d73-d2b5-4eb2-9f26-c52642b5000b"). InnerVolumeSpecName "kube-api-access-l52zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.875838 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l52zq\" (UniqueName: \"kubernetes.io/projected/87d56d73-d2b5-4eb2-9f26-c52642b5000b-kube-api-access-l52zq\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.875866 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d56d73-d2b5-4eb2-9f26-c52642b5000b-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.875877 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.875884 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0305702e-2536-44cf-bc5d-fc3bea847db0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:14 crc kubenswrapper[4957]: I1128 21:09:14.875895 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4fjl\" (UniqueName: \"kubernetes.io/projected/0305702e-2536-44cf-bc5d-fc3bea847db0-kube-api-access-p4fjl\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:15 crc kubenswrapper[4957]: W1128 21:09:15.069742 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf3b067_6d8a_4d74_8c8c_285536f779e9.slice/crio-0551abe995c9d76e9fc5b49d6fbfa874d004b6d19af95d73f8d61ebc0b4faf8d WatchSource:0}: Error finding container 
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.071614 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.276373 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dzt8d"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.288435 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cd25j"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.508518 4957 generic.go:334] "Generic (PLEG): container finished" podID="1b34f895-848b-4d42-bacc-04dd981362c9" containerID="ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76" exitCode=0
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.508600 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" event={"ID":"1b34f895-848b-4d42-bacc-04dd981362c9","Type":"ContainerDied","Data":"ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76"}
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.517168 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dzt8d" event={"ID":"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d","Type":"ContainerStarted","Data":"8d9ad10ffcdecccea19174508def0d5c1918f781233c91901254fcc6f63b816a"}
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.523392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerStarted","Data":"0551abe995c9d76e9fc5b49d6fbfa874d004b6d19af95d73f8d61ebc0b4faf8d"}
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.524996 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cd25j" event={"ID":"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf","Type":"ContainerStarted","Data":"85976dfd418ddb8da2957a15bd9633636f7cecf91769ec2c5283c11e2b3248e2"}
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.526414 4957 generic.go:334] "Generic (PLEG): container finished" podID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerID="18089aaf21c3f63c8c88a06ab4d35a9795c50c0fb3d4d5fb5028f3477ca06d40" exitCode=0
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.526506 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-65gdn"
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.527438 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" event={"ID":"901e1c94-d356-4ed1-998a-a7910a1e0510","Type":"ContainerDied","Data":"18089aaf21c3f63c8c88a06ab4d35a9795c50c0fb3d4d5fb5028f3477ca06d40"}
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.527473 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vmccd"
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.533465 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.543183 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b7bbc7d5d-rczg7"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.551308 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 28 21:09:15 crc kubenswrapper[4957]: W1128 21:09:15.618421 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1993c43_4f51_4d8d_8e16_394099cc5b15.slice/crio-627ce8356f3c47b509867f7ffe373b656c7e71d97a40ad10fec9cb9af309ceca WatchSource:0}: Error finding container 627ce8356f3c47b509867f7ffe373b656c7e71d97a40ad10fec9cb9af309ceca: Status 404 returned error can't find the container with id 627ce8356f3c47b509867f7ffe373b656c7e71d97a40ad10fec9cb9af309ceca
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.630966 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.640508 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vmccd"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.683773 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.697892 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-65gdn"]
Nov 28 21:09:15 crc kubenswrapper[4957]: W1128 21:09:15.699709 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f513a2d_d752_44ee_b02c_e7f3dcb3945d.slice/crio-1140ba954e75797ad4e2593b4375bbed3675552f85ef013b641c584881313d97 WatchSource:0}: Error finding container 1140ba954e75797ad4e2593b4375bbed3675552f85ef013b641c584881313d97: Status 404 returned error can't find the container with id 1140ba954e75797ad4e2593b4375bbed3675552f85ef013b641c584881313d97
Nov 28 21:09:15 crc kubenswrapper[4957]: W1128 21:09:15.704572 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d57747_268e_40db_85cc_98d5ed48a55f.slice/crio-2261b2218a4124d51535cb2122366666e5b741b1aa73ae263bb8d8789c233190 WatchSource:0}: Error finding container 2261b2218a4124d51535cb2122366666e5b741b1aa73ae263bb8d8789c233190: Status 404 returned error can't find the container with id 2261b2218a4124d51535cb2122366666e5b741b1aa73ae263bb8d8789c233190
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.711867 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 21:09:15 crc kubenswrapper[4957]: W1128 21:09:15.718826 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b91aacb_b300_41de_814e_26e73ac93c2e.slice/crio-2d9267455279ea273e848126c8ab3b6b8cea940fba30650f01fcfbc12a38eb81 WatchSource:0}: Error finding container 2d9267455279ea273e848126c8ab3b6b8cea940fba30650f01fcfbc12a38eb81: Status 404 returned error can't find the container with id 2d9267455279ea273e848126c8ab3b6b8cea940fba30650f01fcfbc12a38eb81
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.719696 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d"]
Nov 28 21:09:15 crc kubenswrapper[4957]: I1128 21:09:15.728782 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.545227 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9b91aacb-b300-41de-814e-26e73ac93c2e","Type":"ContainerStarted","Data":"2d9267455279ea273e848126c8ab3b6b8cea940fba30650f01fcfbc12a38eb81"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.547655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b7bbc7d5d-rczg7" event={"ID":"c1993c43-4f51-4d8d-8e16-394099cc5b15","Type":"ContainerStarted","Data":"1c7b6e13dc27d43840044f28d41585914ab0b547637723eee9b3ed1b7e4f4048"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.547683 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b7bbc7d5d-rczg7" event={"ID":"c1993c43-4f51-4d8d-8e16-394099cc5b15","Type":"ContainerStarted","Data":"627ce8356f3c47b509867f7ffe373b656c7e71d97a40ad10fec9cb9af309ceca"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.551233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" event={"ID":"1b34f895-848b-4d42-bacc-04dd981362c9","Type":"ContainerStarted","Data":"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.551373 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.570477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4f513a2d-d752-44ee-b02c-e7f3dcb3945d","Type":"ContainerStarted","Data":"1140ba954e75797ad4e2593b4375bbed3675552f85ef013b641c584881313d97"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.579790 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b7bbc7d5d-rczg7" podStartSLOduration=11.579770478 podStartE2EDuration="11.579770478s" podCreationTimestamp="2025-11-28 21:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:16.575625228 +0000 UTC m=+1196.044273137" watchObservedRunningTime="2025-11-28 21:09:16.579770478 +0000 UTC m=+1196.048418387"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.595501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" event={"ID":"72d57747-268e-40db-85cc-98d5ed48a55f","Type":"ContainerStarted","Data":"2261b2218a4124d51535cb2122366666e5b741b1aa73ae263bb8d8789c233190"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.596835 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"179be6ed-b240-4fde-995c-92c72dbd2b02","Type":"ContainerStarted","Data":"9e2da4a6a4633b6df65815018440afdff855b42df8a909a8f468ff99ade7ed10"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.599482 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerStarted","Data":"dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.601667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12d0ff1a-6220-432a-bc8a-f611c2e6996d","Type":"ContainerStarted","Data":"4e8815ddd5db6aa9aa4e8207485727867f23077b06d284f70d2f2dda2ac55399"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.608014 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" event={"ID":"901e1c94-d356-4ed1-998a-a7910a1e0510","Type":"ContainerStarted","Data":"9d2402545cc0c9052e30f763184c4008f8ed3d0534edc78a0f835622f92d19b9"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.609968 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-99cfd"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.612598 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" podStartSLOduration=3.436578815 podStartE2EDuration="19.612579354s" podCreationTimestamp="2025-11-28 21:08:57 +0000 UTC" firstStartedPulling="2025-11-28 21:08:58.303061119 +0000 UTC m=+1177.771709018" lastFinishedPulling="2025-11-28 21:09:14.479061648 +0000 UTC m=+1193.947709557" observedRunningTime="2025-11-28 21:09:16.606250761 +0000 UTC m=+1196.074898670" watchObservedRunningTime="2025-11-28 21:09:16.612579354 +0000 UTC m=+1196.081227263"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.631422 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerStarted","Data":"effcb0c3cd0c8a3dfd159e80c73618f8f4a23a27ca559dd7532e8f835c678840"}
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.654727 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" podStartSLOduration=3.126209946 podStartE2EDuration="19.654705416s" podCreationTimestamp="2025-11-28 21:08:57 +0000 UTC" firstStartedPulling="2025-11-28 21:08:58.005429699 +0000 UTC m=+1177.474077608" lastFinishedPulling="2025-11-28 21:09:14.533925169 +0000 UTC m=+1194.002573078" observedRunningTime="2025-11-28 21:09:16.650845123 +0000 UTC m=+1196.119493052" watchObservedRunningTime="2025-11-28 21:09:16.654705416 +0000 UTC m=+1196.123353325"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.675627 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.850242 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0305702e-2536-44cf-bc5d-fc3bea847db0" path="/var/lib/kubelet/pods/0305702e-2536-44cf-bc5d-fc3bea847db0/volumes"
Nov 28 21:09:16 crc kubenswrapper[4957]: I1128 21:09:16.852453 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d56d73-d2b5-4eb2-9f26-c52642b5000b" path="/var/lib/kubelet/pods/87d56d73-d2b5-4eb2-9f26-c52642b5000b/volumes"
Nov 28 21:09:17 crc kubenswrapper[4957]: W1128 21:09:17.354499 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d137d00_b823_4d67_a158_71e84c6d2c6b.slice/crio-f6e202ff1a7f4a06a49add1873c338275c4dbf39a6a0f01374f5bd152929a892 WatchSource:0}: Error finding container f6e202ff1a7f4a06a49add1873c338275c4dbf39a6a0f01374f5bd152929a892: Status 404 returned error can't find the container with id f6e202ff1a7f4a06a49add1873c338275c4dbf39a6a0f01374f5bd152929a892
Nov 28 21:09:17 crc kubenswrapper[4957]: I1128 21:09:17.641817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d137d00-b823-4d67-a158-71e84c6d2c6b","Type":"ContainerStarted","Data":"f6e202ff1a7f4a06a49add1873c338275c4dbf39a6a0f01374f5bd152929a892"}
Nov 28 21:09:22 crc kubenswrapper[4957]: I1128 21:09:22.435378 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-99cfd"
Nov 28 21:09:22 crc kubenswrapper[4957]: I1128 21:09:22.734370 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6"
Nov 28 21:09:22 crc kubenswrapper[4957]: I1128 21:09:22.784649 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"]
Nov 28 21:09:22 crc kubenswrapper[4957]: I1128 21:09:22.784871 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="dnsmasq-dns" containerID="cri-o://9d2402545cc0c9052e30f763184c4008f8ed3d0534edc78a0f835622f92d19b9" gracePeriod=10
Nov 28 21:09:23 crc kubenswrapper[4957]: I1128 21:09:23.697350 4957 generic.go:334] "Generic (PLEG): container finished" podID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerID="9d2402545cc0c9052e30f763184c4008f8ed3d0534edc78a0f835622f92d19b9" exitCode=0
Nov 28 21:09:23 crc kubenswrapper[4957]: I1128 21:09:23.697431 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" event={"ID":"901e1c94-d356-4ed1-998a-a7910a1e0510","Type":"ContainerDied","Data":"9d2402545cc0c9052e30f763184c4008f8ed3d0534edc78a0f835622f92d19b9"}
Nov 28 21:09:23 crc kubenswrapper[4957]: I1128 21:09:23.977390 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99cfd"
Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.092623 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7f4m\" (UniqueName: \"kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m\") pod \"901e1c94-d356-4ed1-998a-a7910a1e0510\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") "
Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.092726 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config\") pod \"901e1c94-d356-4ed1-998a-a7910a1e0510\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") "
Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.092791 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc\") pod \"901e1c94-d356-4ed1-998a-a7910a1e0510\" (UID: \"901e1c94-d356-4ed1-998a-a7910a1e0510\") "
Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.096974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m" (OuterVolumeSpecName: "kube-api-access-l7f4m") pod "901e1c94-d356-4ed1-998a-a7910a1e0510" (UID: "901e1c94-d356-4ed1-998a-a7910a1e0510"). InnerVolumeSpecName "kube-api-access-l7f4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
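This is the rollout retiring dnsmasq-dns-666b6646f7-99cfd: once its replacements report ready, the API DELETE lands and the kubelet kills the dnsmasq-dns container with a 10-second grace period (gracePeriod=10) before the volume teardown begins. The log does not say whether the 10s came from the pod's terminationGracePeriodSeconds or from the delete request itself; the delete-options variant looks like this in client-go (pod name from the log, kubeconfig path assumed):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    grace := int64(10) // the gracePeriod=10 seen in kuberuntime_container.go above
    if err := cs.CoreV1().Pods("openstack").Delete(context.Background(),
        "dnsmasq-dns-666b6646f7-99cfd",
        metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
        panic(err)
    }
}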
InnerVolumeSpecName "kube-api-access-l7f4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.141406 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "901e1c94-d356-4ed1-998a-a7910a1e0510" (UID: "901e1c94-d356-4ed1-998a-a7910a1e0510"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.141437 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config" (OuterVolumeSpecName: "config") pod "901e1c94-d356-4ed1-998a-a7910a1e0510" (UID: "901e1c94-d356-4ed1-998a-a7910a1e0510"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.195001 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.195262 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7f4m\" (UniqueName: \"kubernetes.io/projected/901e1c94-d356-4ed1-998a-a7910a1e0510-kube-api-access-l7f4m\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.195348 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e1c94-d356-4ed1-998a-a7910a1e0510-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.708312 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" event={"ID":"901e1c94-d356-4ed1-998a-a7910a1e0510","Type":"ContainerDied","Data":"a1b4c5550818481a24d12d75afbf855b29e8d32c916ba1b660ff7a638f41ac14"} Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.708654 4957 scope.go:117] "RemoveContainer" containerID="9d2402545cc0c9052e30f763184c4008f8ed3d0534edc78a0f835622f92d19b9" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.708344 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99cfd" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.710488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4f513a2d-d752-44ee-b02c-e7f3dcb3945d","Type":"ContainerStarted","Data":"0d968644bfa03f9fb4d1a5b0b813dbea761eeb188fc2f47c2f4e129498f191a5"} Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.725084 4957 scope.go:117] "RemoveContainer" containerID="18089aaf21c3f63c8c88a06ab4d35a9795c50c0fb3d4d5fb5028f3477ca06d40" Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.742935 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"] Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.750060 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99cfd"] Nov 28 21:09:24 crc kubenswrapper[4957]: I1128 21:09:24.824290 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" path="/var/lib/kubelet/pods/901e1c94-d356-4ed1-998a-a7910a1e0510/volumes" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.418009 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.418331 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.423036 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.736863 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d97270c0-f75e-4695-87b5-2c7cfd08bf02","Type":"ContainerStarted","Data":"c23fa27416759592380296e7db7d30a4a76d286bff3492eaf2b9bb61ee03c1c8"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.739748 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9b91aacb-b300-41de-814e-26e73ac93c2e","Type":"ContainerStarted","Data":"05e8e8d66bafe9f2268261deea253c33b58380b955e9b39f31632fa0533cc2d0"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.743845 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cd25j" event={"ID":"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf","Type":"ContainerStarted","Data":"7d02d9a720a9c751291cc124fb2a2eb84b9d84ad5b4081716386e27b3cfa9fb5"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.746291 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d137d00-b823-4d67-a158-71e84c6d2c6b","Type":"ContainerStarted","Data":"ef69198c9878cd6fae0ae42c99eba2dc220f0bb91938c1bf1166ce28ffc98ac1"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.750570 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" event={"ID":"72d57747-268e-40db-85cc-98d5ed48a55f","Type":"ContainerStarted","Data":"35008ee863dd26ef845a23fea8a9e4749cd0863c0fa0f20e2cd2f2ce1cf67314"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.752954 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dzt8d" 
event={"ID":"6bc9960f-fdff-42fa-8cdd-4ec0d88f359d","Type":"ContainerStarted","Data":"74a01f48e5092d7ffca2e163c929960453a0133439a80282443d45f1af5de8a0"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.753103 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dzt8d" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.759158 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"179be6ed-b240-4fde-995c-92c72dbd2b02","Type":"ContainerStarted","Data":"c9fbc1f6af2217b039fccd53631b453cbb31d176ecdb36a64348b3cbb98dbe94"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.759891 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.766159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12d0ff1a-6220-432a-bc8a-f611c2e6996d","Type":"ContainerStarted","Data":"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e"} Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.766788 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.772105 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b7bbc7d5d-rczg7" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.792261 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dzt8d" podStartSLOduration=10.418206957 podStartE2EDuration="18.792243876s" podCreationTimestamp="2025-11-28 21:09:07 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.28337921 +0000 UTC m=+1194.752027119" lastFinishedPulling="2025-11-28 21:09:23.657416129 +0000 UTC m=+1203.126064038" observedRunningTime="2025-11-28 21:09:25.777936139 +0000 UTC m=+1205.246584048" watchObservedRunningTime="2025-11-28 21:09:25.792243876 +0000 UTC m=+1205.260891785" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.851971 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-bhd2d" podStartSLOduration=13.968771103 podStartE2EDuration="21.851950975s" podCreationTimestamp="2025-11-28 21:09:04 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.713242168 +0000 UTC m=+1195.181890067" lastFinishedPulling="2025-11-28 21:09:23.59642203 +0000 UTC m=+1203.065069939" observedRunningTime="2025-11-28 21:09:25.839752499 +0000 UTC m=+1205.308400418" watchObservedRunningTime="2025-11-28 21:09:25.851950975 +0000 UTC m=+1205.320598884" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.937749 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.599321378 podStartE2EDuration="24.937734185s" podCreationTimestamp="2025-11-28 21:09:01 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.659461073 +0000 UTC m=+1195.128108982" lastFinishedPulling="2025-11-28 21:09:22.99787387 +0000 UTC m=+1202.466521789" observedRunningTime="2025-11-28 21:09:25.910765761 +0000 UTC m=+1205.379413670" watchObservedRunningTime="2025-11-28 21:09:25.937734185 +0000 UTC m=+1205.406382094" Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.937871 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:09:25 crc kubenswrapper[4957]: I1128 21:09:25.973439 
4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.461865775 podStartE2EDuration="21.973423041s" podCreationTimestamp="2025-11-28 21:09:04 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.622564848 +0000 UTC m=+1195.091212757" lastFinishedPulling="2025-11-28 21:09:24.134122114 +0000 UTC m=+1203.602770023" observedRunningTime="2025-11-28 21:09:25.926238617 +0000 UTC m=+1205.394886526" watchObservedRunningTime="2025-11-28 21:09:25.973423041 +0000 UTC m=+1205.442070950" Nov 28 21:09:26 crc kubenswrapper[4957]: I1128 21:09:26.781739 4957 generic.go:334] "Generic (PLEG): container finished" podID="8edb774a-3d8c-4b9f-b9ca-febeb68d14bf" containerID="7d02d9a720a9c751291cc124fb2a2eb84b9d84ad5b4081716386e27b3cfa9fb5" exitCode=0 Nov 28 21:09:26 crc kubenswrapper[4957]: I1128 21:09:26.781791 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cd25j" event={"ID":"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf","Type":"ContainerDied","Data":"7d02d9a720a9c751291cc124fb2a2eb84b9d84ad5b4081716386e27b3cfa9fb5"} Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.801440 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cd25j" event={"ID":"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf","Type":"ContainerStarted","Data":"1ef4a18c65dab52d6a2e3fc7db415118a14da8215458419f509a9bd3adcb7943"} Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.801881 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cd25j" event={"ID":"8edb774a-3d8c-4b9f-b9ca-febeb68d14bf","Type":"ContainerStarted","Data":"a822b4571ce905a37d0785026de31eb5ef2099fe38cbe83755eb5c6163cccd27"} Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.801900 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.801911 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.805222 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerStarted","Data":"ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7"} Nov 28 21:09:28 crc kubenswrapper[4957]: I1128 21:09:28.834624 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cd25j" podStartSLOduration=14.120503938 podStartE2EDuration="21.834598168s" podCreationTimestamp="2025-11-28 21:09:07 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.283738469 +0000 UTC m=+1194.752386378" lastFinishedPulling="2025-11-28 21:09:22.997832699 +0000 UTC m=+1202.466480608" observedRunningTime="2025-11-28 21:09:28.824057932 +0000 UTC m=+1208.292705841" watchObservedRunningTime="2025-11-28 21:09:28.834598168 +0000 UTC m=+1208.303246077" Nov 28 21:09:30 crc kubenswrapper[4957]: I1128 21:09:30.835188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3d137d00-b823-4d67-a158-71e84c6d2c6b","Type":"ContainerStarted","Data":"10116719367f04308acef6c8f0ddce5dce7cf24d9d7aee942328ffdd09993043"} Nov 28 21:09:30 crc kubenswrapper[4957]: I1128 21:09:30.838824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"4f513a2d-d752-44ee-b02c-e7f3dcb3945d","Type":"ContainerStarted","Data":"7302e9a49b492d6df0c0ec2304d511528c3ed54be5d7f1028c314e37fcb44cef"} Nov 28 21:09:30 crc kubenswrapper[4957]: I1128 21:09:30.866268 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.118785233 podStartE2EDuration="24.866251033s" podCreationTimestamp="2025-11-28 21:09:06 +0000 UTC" firstStartedPulling="2025-11-28 21:09:17.359156215 +0000 UTC m=+1196.827804124" lastFinishedPulling="2025-11-28 21:09:30.106621995 +0000 UTC m=+1209.575269924" observedRunningTime="2025-11-28 21:09:30.866072929 +0000 UTC m=+1210.334720858" watchObservedRunningTime="2025-11-28 21:09:30.866251033 +0000 UTC m=+1210.334898942" Nov 28 21:09:30 crc kubenswrapper[4957]: I1128 21:09:30.897058 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.503994211 podStartE2EDuration="21.89704065s" podCreationTimestamp="2025-11-28 21:09:09 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.713087334 +0000 UTC m=+1195.181735243" lastFinishedPulling="2025-11-28 21:09:30.106133773 +0000 UTC m=+1209.574781682" observedRunningTime="2025-11-28 21:09:30.889344973 +0000 UTC m=+1210.357992922" watchObservedRunningTime="2025-11-28 21:09:30.89704065 +0000 UTC m=+1210.365688559" Nov 28 21:09:31 crc kubenswrapper[4957]: I1128 21:09:31.087526 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:31 crc kubenswrapper[4957]: I1128 21:09:31.976743 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.004405 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.037104 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.087139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.144863 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.872675 4957 generic.go:334] "Generic (PLEG): container finished" podID="d97270c0-f75e-4695-87b5-2c7cfd08bf02" containerID="c23fa27416759592380296e7db7d30a4a76d286bff3492eaf2b9bb61ee03c1c8" exitCode=0 Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.872750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d97270c0-f75e-4695-87b5-2c7cfd08bf02","Type":"ContainerDied","Data":"c23fa27416759592380296e7db7d30a4a76d286bff3492eaf2b9bb61ee03c1c8"} Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.874687 4957 generic.go:334] "Generic (PLEG): container finished" podID="9b91aacb-b300-41de-814e-26e73ac93c2e" containerID="05e8e8d66bafe9f2268261deea253c33b58380b955e9b39f31632fa0533cc2d0" exitCode=0 Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.874786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9b91aacb-b300-41de-814e-26e73ac93c2e","Type":"ContainerDied","Data":"05e8e8d66bafe9f2268261deea253c33b58380b955e9b39f31632fa0533cc2d0"} Nov 28 21:09:32 crc 
kubenswrapper[4957]: I1128 21:09:32.875421 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.934914 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 21:09:32 crc kubenswrapper[4957]: I1128 21:09:32.937397 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.130542 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-rlxcs"] Nov 28 21:09:33 crc kubenswrapper[4957]: E1128 21:09:33.131014 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="init" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.131029 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="init" Nov 28 21:09:33 crc kubenswrapper[4957]: E1128 21:09:33.131052 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="dnsmasq-dns" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.131062 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="dnsmasq-dns" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.131308 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="901e1c94-d356-4ed1-998a-a7910a1e0510" containerName="dnsmasq-dns" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.132639 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.139465 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-rlxcs"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.145477 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.196465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.196545 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.196663 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.196708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5q4t\" (UniqueName: 
\"kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.300426 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.300653 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5q4t\" (UniqueName: \"kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.300703 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.300744 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.301635 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.303540 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.304440 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.305746 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-rlxcs"] Nov 28 21:09:33 crc kubenswrapper[4957]: E1128 21:09:33.306801 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s5q4t], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" podUID="e20ad847-d974-4004-bd8d-5a7652647a5b" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.354472 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5q4t\" (UniqueName: 
\"kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t\") pod \"dnsmasq-dns-7f896c8c65-rlxcs\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.354762 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2d7lb"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.356167 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.363664 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.376103 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.377606 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.381330 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.389109 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.416914 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2d7lb"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.456176 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.459752 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.465750 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.466202 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-49fk6" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.466591 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.476436 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.482510 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505290 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505345 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovs-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a39cf-2bad-48a1-9dc7-2309bc652725-config\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505553 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-combined-ca-bundle\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505641 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtjm\" (UniqueName: \"kubernetes.io/projected/a91a39cf-2bad-48a1-9dc7-2309bc652725-kube-api-access-njtjm\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505748 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505778 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49gb\" (UniqueName: \"kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505874 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovn-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505895 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.505922 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608050 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608094 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608303 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovs-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608453 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-scripts\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608603 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovs-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-config\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608701 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a39cf-2bad-48a1-9dc7-2309bc652725-config\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608813 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-combined-ca-bundle\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608840 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtjm\" (UniqueName: \"kubernetes.io/projected/a91a39cf-2bad-48a1-9dc7-2309bc652725-kube-api-access-njtjm\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608916 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9jq\" (UniqueName: \"kubernetes.io/projected/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-kube-api-access-gg9jq\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.608919 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") 
" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609065 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609169 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609219 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49gb\" (UniqueName: \"kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609373 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovn-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609412 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609450 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609478 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a91a39cf-2bad-48a1-9dc7-2309bc652725-ovn-rundir\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.609532 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.610139 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc 
kubenswrapper[4957]: I1128 21:09:33.610478 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.611024 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91a39cf-2bad-48a1-9dc7-2309bc652725-config\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.612989 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.613291 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91a39cf-2bad-48a1-9dc7-2309bc652725-combined-ca-bundle\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.625318 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtjm\" (UniqueName: \"kubernetes.io/projected/a91a39cf-2bad-48a1-9dc7-2309bc652725-kube-api-access-njtjm\") pod \"ovn-controller-metrics-2d7lb\" (UID: \"a91a39cf-2bad-48a1-9dc7-2309bc652725\") " pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.629890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49gb\" (UniqueName: \"kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb\") pod \"dnsmasq-dns-86db49b7ff-d2t46\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711422 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711464 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711521 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-scripts\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711549 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-config\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg9jq\" (UniqueName: \"kubernetes.io/projected/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-kube-api-access-gg9jq\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.711631 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.712777 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.712913 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-scripts\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.712942 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-config\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.715007 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.716457 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.720969 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.733139 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg9jq\" (UniqueName: 
\"kubernetes.io/projected/69f9b12c-31d7-4df2-a4ec-5861c3ad3d76-kube-api-access-gg9jq\") pod \"ovn-northd-0\" (UID: \"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76\") " pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.745013 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2d7lb" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.762630 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.785230 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.901931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9b91aacb-b300-41de-814e-26e73ac93c2e","Type":"ContainerStarted","Data":"4d1db41ea61833abf61133202fdb767119f3e5f5cdb920076bf5d507e1685cc6"} Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.912362 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.912816 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d97270c0-f75e-4695-87b5-2c7cfd08bf02","Type":"ContainerStarted","Data":"aca03ab8a0e22678a6af5748f2fd3f6b7d4d6753cf8e66678c984e1bd5d79f0e"} Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.931151 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.691676787 podStartE2EDuration="33.931132722s" podCreationTimestamp="2025-11-28 21:09:00 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.722647136 +0000 UTC m=+1195.191295045" lastFinishedPulling="2025-11-28 21:09:23.962103071 +0000 UTC m=+1203.430750980" observedRunningTime="2025-11-28 21:09:33.925335571 +0000 UTC m=+1213.393983480" watchObservedRunningTime="2025-11-28 21:09:33.931132722 +0000 UTC m=+1213.399780631" Nov 28 21:09:33 crc kubenswrapper[4957]: I1128 21:09:33.958969 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.018997 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5q4t\" (UniqueName: \"kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t\") pod \"e20ad847-d974-4004-bd8d-5a7652647a5b\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.019471 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb\") pod \"e20ad847-d974-4004-bd8d-5a7652647a5b\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.019526 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc\") pod \"e20ad847-d974-4004-bd8d-5a7652647a5b\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.019609 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config\") pod \"e20ad847-d974-4004-bd8d-5a7652647a5b\" (UID: \"e20ad847-d974-4004-bd8d-5a7652647a5b\") " Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.020324 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config" (OuterVolumeSpecName: "config") pod "e20ad847-d974-4004-bd8d-5a7652647a5b" (UID: "e20ad847-d974-4004-bd8d-5a7652647a5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.020702 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e20ad847-d974-4004-bd8d-5a7652647a5b" (UID: "e20ad847-d974-4004-bd8d-5a7652647a5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.021118 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e20ad847-d974-4004-bd8d-5a7652647a5b" (UID: "e20ad847-d974-4004-bd8d-5a7652647a5b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.025887 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.025914 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.025927 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e20ad847-d974-4004-bd8d-5a7652647a5b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.026430 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t" (OuterVolumeSpecName: "kube-api-access-s5q4t") pod "e20ad847-d974-4004-bd8d-5a7652647a5b" (UID: "e20ad847-d974-4004-bd8d-5a7652647a5b"). InnerVolumeSpecName "kube-api-access-s5q4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.127353 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5q4t\" (UniqueName: \"kubernetes.io/projected/e20ad847-d974-4004-bd8d-5a7652647a5b-kube-api-access-s5q4t\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:34 crc kubenswrapper[4957]: E1128 21:09:34.179567 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf3b067_6d8a_4d74_8c8c_285536f779e9.slice/crio-conmon-ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf3b067_6d8a_4d74_8c8c_285536f779e9.slice/crio-ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7.scope\": RecentStats: unable to find data in memory cache]" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.263318 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.308807999 podStartE2EDuration="36.263298859s" podCreationTimestamp="2025-11-28 21:08:58 +0000 UTC" firstStartedPulling="2025-11-28 21:09:14.417325941 +0000 UTC m=+1193.885973860" lastFinishedPulling="2025-11-28 21:09:23.371816811 +0000 UTC m=+1202.840464720" observedRunningTime="2025-11-28 21:09:33.948715698 +0000 UTC m=+1213.417363607" watchObservedRunningTime="2025-11-28 21:09:34.263298859 +0000 UTC m=+1213.731946768" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.269004 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2d7lb"] Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.391700 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 21:09:34 crc kubenswrapper[4957]: W1128 21:09:34.398598 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69f9b12c_31d7_4df2_a4ec_5861c3ad3d76.slice/crio-2c809cb63b4c12135f4fd89f3006a5c766dc00b190271f09a5494fe76b9a1248 WatchSource:0}: Error finding container 
2c809cb63b4c12135f4fd89f3006a5c766dc00b190271f09a5494fe76b9a1248: Status 404 returned error can't find the container with id 2c809cb63b4c12135f4fd89f3006a5c766dc00b190271f09a5494fe76b9a1248 Nov 28 21:09:34 crc kubenswrapper[4957]: W1128 21:09:34.402886 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99612a8f_7441_4a48_8b5f_ab510f7371d8.slice/crio-62327f0f888ea559540e9decbc3e95ddb71398d179e4db282d0991c9e6512dc8 WatchSource:0}: Error finding container 62327f0f888ea559540e9decbc3e95ddb71398d179e4db282d0991c9e6512dc8: Status 404 returned error can't find the container with id 62327f0f888ea559540e9decbc3e95ddb71398d179e4db282d0991c9e6512dc8 Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.423302 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.463420 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.488167 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.501916 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.501963 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.502308 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.547261 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.547336 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94dx\" (UniqueName: \"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.547490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.547597 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.547860 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.650264 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.650561 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.650605 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94dx\" (UniqueName: \"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.650656 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.650693 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.651109 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.651419 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.651975 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.653204 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: 
\"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.666962 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94dx\" (UniqueName: \"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx\") pod \"dnsmasq-dns-698758b865-hdgrw\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.847460 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.929075 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76","Type":"ContainerStarted","Data":"2c809cb63b4c12135f4fd89f3006a5c766dc00b190271f09a5494fe76b9a1248"} Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.933504 4957 generic.go:334] "Generic (PLEG): container finished" podID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerID="ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7" exitCode=0 Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.933591 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerDied","Data":"ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7"} Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.941558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" event={"ID":"99612a8f-7441-4a48-8b5f-ab510f7371d8","Type":"ContainerStarted","Data":"62327f0f888ea559540e9decbc3e95ddb71398d179e4db282d0991c9e6512dc8"} Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.944504 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2d7lb" event={"ID":"a91a39cf-2bad-48a1-9dc7-2309bc652725","Type":"ContainerStarted","Data":"6b1b63291d39ee1362b81926eae2a74b514b5b90d413f67dd4c444a21cbc6c6f"} Nov 28 21:09:34 crc kubenswrapper[4957]: I1128 21:09:34.944582 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-rlxcs" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.008849 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-rlxcs"] Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.016551 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-rlxcs"] Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.336092 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.580609 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.588018 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.590049 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9lcns" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.590851 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.590922 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.594706 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.626518 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.668806 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.669062 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.669105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jrb\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-kube-api-access-g2jrb\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.669250 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-cache\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.669369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-lock\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.770533 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.770575 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jrb\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-kube-api-access-g2jrb\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.770615 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-cache\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.770651 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-lock\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.770700 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: E1128 21:09:35.770709 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 21:09:35 crc kubenswrapper[4957]: E1128 21:09:35.770728 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 21:09:35 crc kubenswrapper[4957]: E1128 21:09:35.770771 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. No retries permitted until 2025-11-28 21:09:36.270757498 +0000 UTC m=+1215.739405407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.771130 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.771348 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-cache\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.771359 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-lock\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.790365 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jrb\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-kube-api-access-g2jrb\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.798533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:35 crc kubenswrapper[4957]: I1128 21:09:35.954500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerStarted","Data":"de5478ab264462123bfc8125b04057b07b43858297b3ff92b1569291fefd1e4a"} Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.281095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:36 crc kubenswrapper[4957]: E1128 21:09:36.281300 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 21:09:36 crc kubenswrapper[4957]: E1128 21:09:36.281530 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 21:09:36 crc kubenswrapper[4957]: E1128 21:09:36.281590 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. No retries permitted until 2025-11-28 21:09:37.281573939 +0000 UTC m=+1216.750221848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.860086 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20ad847-d974-4004-bd8d-5a7652647a5b" path="/var/lib/kubelet/pods/e20ad847-d974-4004-bd8d-5a7652647a5b/volumes" Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.994069 4957 generic.go:334] "Generic (PLEG): container finished" podID="99612a8f-7441-4a48-8b5f-ab510f7371d8" containerID="746e51b8872b8e728aa4bbfeb81387a6b47a1c67f44d329b0a5299d1aa6a15fb" exitCode=0 Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.994276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" event={"ID":"99612a8f-7441-4a48-8b5f-ab510f7371d8","Type":"ContainerDied","Data":"746e51b8872b8e728aa4bbfeb81387a6b47a1c67f44d329b0a5299d1aa6a15fb"} Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.015317 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2d7lb" event={"ID":"a91a39cf-2bad-48a1-9dc7-2309bc652725","Type":"ContainerStarted","Data":"401753b23c1dd34b21f84cd0ca08cd9e388633b79d4110dd83be6ffff8d8dc0c"} Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.022765 4957 generic.go:334] "Generic (PLEG): container finished" podID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerID="fb6361d1da4f2a42d13dc0e06707f7ca5123443b3ce0275e20363eedacdc5014" exitCode=0 Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.023129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerDied","Data":"fb6361d1da4f2a42d13dc0e06707f7ca5123443b3ce0275e20363eedacdc5014"} Nov 28 
Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.860086 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20ad847-d974-4004-bd8d-5a7652647a5b" path="/var/lib/kubelet/pods/e20ad847-d974-4004-bd8d-5a7652647a5b/volumes"
Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.994069 4957 generic.go:334] "Generic (PLEG): container finished" podID="99612a8f-7441-4a48-8b5f-ab510f7371d8" containerID="746e51b8872b8e728aa4bbfeb81387a6b47a1c67f44d329b0a5299d1aa6a15fb" exitCode=0
Nov 28 21:09:36 crc kubenswrapper[4957]: I1128 21:09:36.994276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" event={"ID":"99612a8f-7441-4a48-8b5f-ab510f7371d8","Type":"ContainerDied","Data":"746e51b8872b8e728aa4bbfeb81387a6b47a1c67f44d329b0a5299d1aa6a15fb"}
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.015317 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2d7lb" event={"ID":"a91a39cf-2bad-48a1-9dc7-2309bc652725","Type":"ContainerStarted","Data":"401753b23c1dd34b21f84cd0ca08cd9e388633b79d4110dd83be6ffff8d8dc0c"}
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.022765 4957 generic.go:334] "Generic (PLEG): container finished" podID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerID="fb6361d1da4f2a42d13dc0e06707f7ca5123443b3ce0275e20363eedacdc5014" exitCode=0
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.023129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerDied","Data":"fb6361d1da4f2a42d13dc0e06707f7ca5123443b3ce0275e20363eedacdc5014"}
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.043939 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2d7lb" podStartSLOduration=4.043924882 podStartE2EDuration="4.043924882s" podCreationTimestamp="2025-11-28 21:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:37.037496056 +0000 UTC m=+1216.506143965" watchObservedRunningTime="2025-11-28 21:09:37.043924882 +0000 UTC m=+1216.512572781"
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.300471 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0"
Nov 28 21:09:37 crc kubenswrapper[4957]: E1128 21:09:37.300782 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 21:09:37 crc kubenswrapper[4957]: E1128 21:09:37.300806 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 21:09:37 crc kubenswrapper[4957]: E1128 21:09:37.300862 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. No retries permitted until 2025-11-28 21:09:39.300841515 +0000 UTC m=+1218.769489424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found
Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.416386 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.504270 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config\") pod \"99612a8f-7441-4a48-8b5f-ab510f7371d8\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.504409 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb\") pod \"99612a8f-7441-4a48-8b5f-ab510f7371d8\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.504477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h49gb\" (UniqueName: \"kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb\") pod \"99612a8f-7441-4a48-8b5f-ab510f7371d8\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.504534 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb\") pod \"99612a8f-7441-4a48-8b5f-ab510f7371d8\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.504583 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc\") pod \"99612a8f-7441-4a48-8b5f-ab510f7371d8\" (UID: \"99612a8f-7441-4a48-8b5f-ab510f7371d8\") " Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.515970 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb" (OuterVolumeSpecName: "kube-api-access-h49gb") pod "99612a8f-7441-4a48-8b5f-ab510f7371d8" (UID: "99612a8f-7441-4a48-8b5f-ab510f7371d8"). InnerVolumeSpecName "kube-api-access-h49gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.537232 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99612a8f-7441-4a48-8b5f-ab510f7371d8" (UID: "99612a8f-7441-4a48-8b5f-ab510f7371d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.542035 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config" (OuterVolumeSpecName: "config") pod "99612a8f-7441-4a48-8b5f-ab510f7371d8" (UID: "99612a8f-7441-4a48-8b5f-ab510f7371d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.549880 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99612a8f-7441-4a48-8b5f-ab510f7371d8" (UID: "99612a8f-7441-4a48-8b5f-ab510f7371d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.553449 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99612a8f-7441-4a48-8b5f-ab510f7371d8" (UID: "99612a8f-7441-4a48-8b5f-ab510f7371d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.607628 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.607664 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.607677 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h49gb\" (UniqueName: \"kubernetes.io/projected/99612a8f-7441-4a48-8b5f-ab510f7371d8-kube-api-access-h49gb\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.607687 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:37 crc kubenswrapper[4957]: I1128 21:09:37.607696 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99612a8f-7441-4a48-8b5f-ab510f7371d8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.034161 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerStarted","Data":"cc2641fa24f7103798753365129c09b6c64c000e64451f5099c5bd16a07fbc5d"} Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.037176 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.039279 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2t46" event={"ID":"99612a8f-7441-4a48-8b5f-ab510f7371d8","Type":"ContainerDied","Data":"62327f0f888ea559540e9decbc3e95ddb71398d179e4db282d0991c9e6512dc8"} Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.039337 4957 scope.go:117] "RemoveContainer" containerID="746e51b8872b8e728aa4bbfeb81387a6b47a1c67f44d329b0a5299d1aa6a15fb" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.050715 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76","Type":"ContainerStarted","Data":"4eb72d9d758e4ab21c2dc3547bea3335b1f7836e5b018b6050168ee3d4d4f581"} Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.050789 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.050803 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"69f9b12c-31d7-4df2-a4ec-5861c3ad3d76","Type":"ContainerStarted","Data":"7f6a4d257e2d7ca5483ac41ddf78b10597f6386d13e6153fadec91ab461791bb"} Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.071936 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hdgrw" podStartSLOduration=4.07191342 podStartE2EDuration="4.07191342s" podCreationTimestamp="2025-11-28 21:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:38.05627761 +0000 UTC m=+1217.524925519" watchObservedRunningTime="2025-11-28 21:09:38.07191342 +0000 UTC m=+1217.540561329" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.085750 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.8146890940000002 podStartE2EDuration="5.085723995s" podCreationTimestamp="2025-11-28 21:09:33 +0000 UTC" firstStartedPulling="2025-11-28 21:09:34.402956007 +0000 UTC m=+1213.871603916" lastFinishedPulling="2025-11-28 21:09:36.673990908 +0000 UTC m=+1216.142638817" observedRunningTime="2025-11-28 21:09:38.077334651 +0000 UTC m=+1217.545982560" watchObservedRunningTime="2025-11-28 21:09:38.085723995 +0000 UTC m=+1217.554371904" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.180111 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.190911 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2t46"] Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.830575 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99612a8f-7441-4a48-8b5f-ab510f7371d8" path="/var/lib/kubelet/pods/99612a8f-7441-4a48-8b5f-ab510f7371d8/volumes" Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.993662 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:09:38 crc kubenswrapper[4957]: I1128 21:09:38.993727 4957 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.059720 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.340582 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:39 crc kubenswrapper[4957]: E1128 21:09:39.340770 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 21:09:39 crc kubenswrapper[4957]: E1128 21:09:39.341236 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 21:09:39 crc kubenswrapper[4957]: E1128 21:09:39.341297 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. No retries permitted until 2025-11-28 21:09:43.341277642 +0000 UTC m=+1222.809925551 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.538391 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bzbx8"] Nov 28 21:09:39 crc kubenswrapper[4957]: E1128 21:09:39.538995 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99612a8f-7441-4a48-8b5f-ab510f7371d8" containerName="init" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.539085 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99612a8f-7441-4a48-8b5f-ab510f7371d8" containerName="init" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.539438 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="99612a8f-7441-4a48-8b5f-ab510f7371d8" containerName="init" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.540185 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.542003 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.542272 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.542440 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.544369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.544551 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.544683 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.544790 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.544914 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc7j\" (UniqueName: \"kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.545046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.545225 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.552724 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bzbx8"] Nov 28 
21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651389 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651547 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651634 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651661 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc7j\" (UniqueName: \"kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.651726 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.652201 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.652556 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.652785 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.664465 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.664945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.667120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.667536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc7j\" (UniqueName: \"kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j\") pod \"swift-ring-rebalance-bzbx8\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:39 crc kubenswrapper[4957]: I1128 21:09:39.865463 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:40 crc kubenswrapper[4957]: I1128 21:09:40.212156 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 21:09:40 crc kubenswrapper[4957]: I1128 21:09:40.212494 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 21:09:40 crc kubenswrapper[4957]: I1128 21:09:40.303423 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.187505 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.740490 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-efd2-account-create-update-rxcxq"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.742127 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.745702 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.755603 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-efd2-account-create-update-rxcxq"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.767445 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fp7zh"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.768772 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.776976 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fp7zh"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.881319 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.881372 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.907097 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2zn\" (UniqueName: \"kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.907163 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kn45\" (UniqueName: \"kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.907510 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.907565 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.969458 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-njhfl"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.971180 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-njhfl" Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.981772 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a86c-account-create-update-q856p"] Nov 28 21:09:41 crc kubenswrapper[4957]: I1128 21:09:41.997671 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.004289 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.009309 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.009371 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.009447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2zn\" (UniqueName: \"kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.009472 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kn45\" (UniqueName: \"kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.010425 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.010573 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-njhfl"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.010845 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.028116 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a86c-account-create-update-q856p"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.034066 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2zn\" (UniqueName: 
\"kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn\") pod \"keystone-db-create-fp7zh\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.040136 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kn45\" (UniqueName: \"kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45\") pod \"keystone-efd2-account-create-update-rxcxq\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.055779 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.068962 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.090695 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.111648 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbp85\" (UniqueName: \"kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85\") pod \"placement-a86c-account-create-update-q856p\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.111717 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts\") pod \"placement-a86c-account-create-update-q856p\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.111972 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z768\" (UniqueName: \"kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.111998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.151116 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fgcv6"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.152531 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.162684 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fgcv6"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.205313 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214255 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z768\" (UniqueName: \"kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214287 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbp85\" (UniqueName: \"kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85\") pod \"placement-a86c-account-create-update-q856p\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214433 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts\") pod \"placement-a86c-account-create-update-q856p\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.214497 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9bxq\" (UniqueName: \"kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.215073 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.215239 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts\") pod \"placement-a86c-account-create-update-q856p\" (UID: 
\"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.242860 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z768\" (UniqueName: \"kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768\") pod \"placement-db-create-njhfl\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.244582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbp85\" (UniqueName: \"kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85\") pod \"placement-a86c-account-create-update-q856p\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.245984 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3780-account-create-update-nzcdq"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.247862 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.250921 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.265162 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3780-account-create-update-nzcdq"] Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.298436 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-njhfl" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.320728 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9bxq\" (UniqueName: \"kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.320808 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.320847 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw2vq\" (UniqueName: \"kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.320882 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.321979 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.330526 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.352867 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9bxq\" (UniqueName: \"kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq\") pod \"glance-db-create-fgcv6\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.423098 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw2vq\" (UniqueName: \"kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.426876 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.429936 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.439081 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw2vq\" (UniqueName: \"kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq\") pod \"glance-3780-account-create-update-nzcdq\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.474354 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:42 crc kubenswrapper[4957]: I1128 21:09:42.700660 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:43 crc kubenswrapper[4957]: I1128 21:09:43.352821 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:43 crc kubenswrapper[4957]: E1128 21:09:43.352988 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 21:09:43 crc kubenswrapper[4957]: E1128 21:09:43.353160 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 21:09:43 crc kubenswrapper[4957]: E1128 21:09:43.353230 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. No retries permitted until 2025-11-28 21:09:51.353198064 +0000 UTC m=+1230.821845973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found Nov 28 21:09:43 crc kubenswrapper[4957]: I1128 21:09:43.375390 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bzbx8"] Nov 28 21:09:43 crc kubenswrapper[4957]: I1128 21:09:43.386903 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fp7zh"] Nov 28 21:09:43 crc kubenswrapper[4957]: I1128 21:09:43.526998 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a86c-account-create-update-q856p"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.320292 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8642b"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.322134 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.364087 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8642b"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.388449 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdr4\" (UniqueName: \"kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.388844 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.426387 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-e779-account-create-update-kqnth"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.428063 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.433125 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.444989 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e779-account-create-update-kqnth"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.491625 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.491679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.491708 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2prw\" (UniqueName: \"kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.492242 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrdr4\" (UniqueName: \"kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: 
\"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.492734 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.510728 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrdr4\" (UniqueName: \"kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4\") pod \"mysqld-exporter-openstack-db-create-8642b\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.593964 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.594029 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2prw\" (UniqueName: \"kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.594998 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.612806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2prw\" (UniqueName: \"kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw\") pod \"mysqld-exporter-e779-account-create-update-kqnth\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.720471 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3780-account-create-update-nzcdq"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.761999 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.851306 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-efd2-account-create-update-rxcxq"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.851364 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.892229 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.900483 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-njhfl"] Nov 28 21:09:44 crc kubenswrapper[4957]: I1128 21:09:44.966047 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fgcv6"] Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.043176 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.043703 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="dnsmasq-dns" containerID="cri-o://64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70" gracePeriod=10 Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.200878 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a86c-account-create-update-q856p" event={"ID":"b166cebe-60ab-4a64-843d-7f4c0586c788","Type":"ContainerStarted","Data":"7a5902b9af319542c0c399752b44cfee09f0d9d8579aaa7bc95be2d84338a231"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.200927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a86c-account-create-update-q856p" event={"ID":"b166cebe-60ab-4a64-843d-7f4c0586c788","Type":"ContainerStarted","Data":"c1cf8719a53d868f94b7a0f7bf172627027f0ad9b593a94eec4197c1b8aa1085"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.213820 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerStarted","Data":"9009ab1d771da34161caf020be8928d10b0cbd46cb404d4e2fd1a19b0dcf018a"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.222340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bzbx8" event={"ID":"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7","Type":"ContainerStarted","Data":"a692e1a8e70bae249ba773538f4ccf4d944e9f48abe5e420f3dbc8fd860ff11c"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.228388 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3780-account-create-update-nzcdq" event={"ID":"d40378c2-5f58-4240-8b46-9b8503574a70","Type":"ContainerStarted","Data":"ec15be84652fab2b6e1ee21c173cd8e4420571821d459eaefad3bc922dd19b74"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.228424 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3780-account-create-update-nzcdq" event={"ID":"d40378c2-5f58-4240-8b46-9b8503574a70","Type":"ContainerStarted","Data":"6c5fb8cce349106ba4c2fdcebaed57fb169a863f33039a1e7c83c75fedb40784"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.229583 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a86c-account-create-update-q856p" podStartSLOduration=4.229559842 podStartE2EDuration="4.229559842s" podCreationTimestamp="2025-11-28 21:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:45.219918598 +0000 UTC m=+1224.688566507" watchObservedRunningTime="2025-11-28 21:09:45.229559842 +0000 UTC m=+1224.698207751" Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.254785 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-rxcxq" event={"ID":"2e9a05c8-f27b-4636-a835-ceb3a38cf708","Type":"ContainerStarted","Data":"9eb4998602978afc0204abf90dd8ff3024f99b2df489ff72fde495b0a72ed4c0"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.255494 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3780-account-create-update-nzcdq" podStartSLOduration=3.25547453 podStartE2EDuration="3.25547453s" podCreationTimestamp="2025-11-28 21:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:45.247913627 +0000 UTC m=+1224.716561536" watchObservedRunningTime="2025-11-28 21:09:45.25547453 +0000 UTC m=+1224.724122439" Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.259988 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgcv6" event={"ID":"7bdf6984-caf0-430e-bb97-52e25019aa8f","Type":"ContainerStarted","Data":"f2bceafbeca9529bc0a824a3c0784d19ec6d54d670c7447ae614dbd31d622e2a"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.264285 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fp7zh" event={"ID":"90bdf335-aae2-44c6-927e-f6d9225d5970","Type":"ContainerStarted","Data":"4ceca11700f34de2edb0bf530cef8577435fb04033483cc92bad2efb34bf5783"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.264338 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fp7zh" event={"ID":"90bdf335-aae2-44c6-927e-f6d9225d5970","Type":"ContainerStarted","Data":"1ffe4066e4292ebb83b7911745b21705957bdb4eb33bdf5dbfbff57aecc040e5"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.267974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-njhfl" event={"ID":"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc","Type":"ContainerStarted","Data":"edaf002e17d784f7d9b91116735518af1ec309191322ab6b920f0eda52436341"} Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.378872 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8642b"] Nov 28 21:09:45 crc kubenswrapper[4957]: W1128 21:09:45.420405 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4a6f3c4_1c3e_459e_b7d7_9ec0fe0e1337.slice/crio-15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5 WatchSource:0}: Error finding container 15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5: Status 404 returned error can't find the container with id 15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5 Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.672861 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e779-account-create-update-kqnth"] Nov 28 21:09:45 crc kubenswrapper[4957]: W1128 21:09:45.685541 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7330b121_68e0_43b8_ba88_a8c6f08c878b.slice/crio-60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f WatchSource:0}: Error finding container 60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f: Status 404 returned error can't find the container with id 60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 
21:09:45.896498 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.933889 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config\") pod \"1b34f895-848b-4d42-bacc-04dd981362c9\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.934490 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wc7d\" (UniqueName: \"kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d\") pod \"1b34f895-848b-4d42-bacc-04dd981362c9\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.934665 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc\") pod \"1b34f895-848b-4d42-bacc-04dd981362c9\" (UID: \"1b34f895-848b-4d42-bacc-04dd981362c9\") " Nov 28 21:09:45 crc kubenswrapper[4957]: I1128 21:09:45.953632 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d" (OuterVolumeSpecName: "kube-api-access-2wc7d") pod "1b34f895-848b-4d42-bacc-04dd981362c9" (UID: "1b34f895-848b-4d42-bacc-04dd981362c9"). InnerVolumeSpecName "kube-api-access-2wc7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.018585 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b34f895-848b-4d42-bacc-04dd981362c9" (UID: "1b34f895-848b-4d42-bacc-04dd981362c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.028198 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config" (OuterVolumeSpecName: "config") pod "1b34f895-848b-4d42-bacc-04dd981362c9" (UID: "1b34f895-848b-4d42-bacc-04dd981362c9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.038263 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.038293 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wc7d\" (UniqueName: \"kubernetes.io/projected/1b34f895-848b-4d42-bacc-04dd981362c9-kube-api-access-2wc7d\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.038306 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b34f895-848b-4d42-bacc-04dd981362c9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.290290 4957 generic.go:334] "Generic (PLEG): container finished" podID="b166cebe-60ab-4a64-843d-7f4c0586c788" containerID="7a5902b9af319542c0c399752b44cfee09f0d9d8579aaa7bc95be2d84338a231" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.290362 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a86c-account-create-update-q856p" event={"ID":"b166cebe-60ab-4a64-843d-7f4c0586c788","Type":"ContainerDied","Data":"7a5902b9af319542c0c399752b44cfee09f0d9d8579aaa7bc95be2d84338a231"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.293614 4957 generic.go:334] "Generic (PLEG): container finished" podID="d40378c2-5f58-4240-8b46-9b8503574a70" containerID="ec15be84652fab2b6e1ee21c173cd8e4420571821d459eaefad3bc922dd19b74" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.293692 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3780-account-create-update-nzcdq" event={"ID":"d40378c2-5f58-4240-8b46-9b8503574a70","Type":"ContainerDied","Data":"ec15be84652fab2b6e1ee21c173cd8e4420571821d459eaefad3bc922dd19b74"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.295563 4957 generic.go:334] "Generic (PLEG): container finished" podID="7bdf6984-caf0-430e-bb97-52e25019aa8f" containerID="8026bfd48571ee9089a35f3dc2e1b5f4a90c9c75f117e08d3290e854d00201d0" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.295616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgcv6" event={"ID":"7bdf6984-caf0-430e-bb97-52e25019aa8f","Type":"ContainerDied","Data":"8026bfd48571ee9089a35f3dc2e1b5f4a90c9c75f117e08d3290e854d00201d0"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.297442 4957 generic.go:334] "Generic (PLEG): container finished" podID="90bdf335-aae2-44c6-927e-f6d9225d5970" containerID="4ceca11700f34de2edb0bf530cef8577435fb04033483cc92bad2efb34bf5783" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.297483 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fp7zh" event={"ID":"90bdf335-aae2-44c6-927e-f6d9225d5970","Type":"ContainerDied","Data":"4ceca11700f34de2edb0bf530cef8577435fb04033483cc92bad2efb34bf5783"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.300189 4957 generic.go:334] "Generic (PLEG): container finished" podID="1b34f895-848b-4d42-bacc-04dd981362c9" containerID="64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.300302 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.300303 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" event={"ID":"1b34f895-848b-4d42-bacc-04dd981362c9","Type":"ContainerDied","Data":"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.301824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qwwv6" event={"ID":"1b34f895-848b-4d42-bacc-04dd981362c9","Type":"ContainerDied","Data":"bc575d718860854d1fd05102374791d0f72f462938f5a5bbbb0540fd9e7eb774"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.301852 4957 scope.go:117] "RemoveContainer" containerID="64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.305471 4957 generic.go:334] "Generic (PLEG): container finished" podID="2e9a05c8-f27b-4636-a835-ceb3a38cf708" containerID="9f9794d91df57d2a0f5a03144d8d4fdf7ab07730bffbb5b76e1a431c7910f799" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.305536 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-rxcxq" event={"ID":"2e9a05c8-f27b-4636-a835-ceb3a38cf708","Type":"ContainerDied","Data":"9f9794d91df57d2a0f5a03144d8d4fdf7ab07730bffbb5b76e1a431c7910f799"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.316153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" event={"ID":"7330b121-68e0-43b8-ba88-a8c6f08c878b","Type":"ContainerStarted","Data":"36c7c81f241a776668d44474fcff4d14d1a6dd199236b784cb68c5fa39aababd"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.316194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" event={"ID":"7330b121-68e0-43b8-ba88-a8c6f08c878b","Type":"ContainerStarted","Data":"60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.319383 4957 generic.go:334] "Generic (PLEG): container finished" podID="a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" containerID="131a055e9c035ea96eed3a1a0bcbb4a70b6c16ce5b2adfab6adc9f18bcbe2be9" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.319439 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8642b" event={"ID":"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337","Type":"ContainerDied","Data":"131a055e9c035ea96eed3a1a0bcbb4a70b6c16ce5b2adfab6adc9f18bcbe2be9"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.319463 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8642b" event={"ID":"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337","Type":"ContainerStarted","Data":"15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.326890 4957 generic.go:334] "Generic (PLEG): container finished" podID="d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" containerID="af4f236da06302b031f51fab43a9f8d7c3f62acd97038edae8ad0cbd694171af" exitCode=0 Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.326934 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-njhfl" 
event={"ID":"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc","Type":"ContainerDied","Data":"af4f236da06302b031f51fab43a9f8d7c3f62acd97038edae8ad0cbd694171af"} Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.410087 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.430519 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qwwv6"] Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.438245 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" podStartSLOduration=2.438227072 podStartE2EDuration="2.438227072s" podCreationTimestamp="2025-11-28 21:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:09:46.415948861 +0000 UTC m=+1225.884596770" watchObservedRunningTime="2025-11-28 21:09:46.438227072 +0000 UTC m=+1225.906874981" Nov 28 21:09:46 crc kubenswrapper[4957]: I1128 21:09:46.822858 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" path="/var/lib/kubelet/pods/1b34f895-848b-4d42-bacc-04dd981362c9/volumes" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.245037 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.341255 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fp7zh" event={"ID":"90bdf335-aae2-44c6-927e-f6d9225d5970","Type":"ContainerDied","Data":"1ffe4066e4292ebb83b7911745b21705957bdb4eb33bdf5dbfbff57aecc040e5"} Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.341323 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fp7zh" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.341329 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ffe4066e4292ebb83b7911745b21705957bdb4eb33bdf5dbfbff57aecc040e5" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.344866 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerStarted","Data":"a980127708b819d7861bca333b7a086aa7d4d44fbe558acb2dc20379bb5ffe60"} Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.346780 4957 generic.go:334] "Generic (PLEG): container finished" podID="7330b121-68e0-43b8-ba88-a8c6f08c878b" containerID="36c7c81f241a776668d44474fcff4d14d1a6dd199236b784cb68c5fa39aababd" exitCode=0 Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.346881 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" event={"ID":"7330b121-68e0-43b8-ba88-a8c6f08c878b","Type":"ContainerDied","Data":"36c7c81f241a776668d44474fcff4d14d1a6dd199236b784cb68c5fa39aababd"} Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.364891 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts\") pod \"90bdf335-aae2-44c6-927e-f6d9225d5970\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.365131 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv2zn\" (UniqueName: \"kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn\") pod \"90bdf335-aae2-44c6-927e-f6d9225d5970\" (UID: \"90bdf335-aae2-44c6-927e-f6d9225d5970\") " Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.365807 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90bdf335-aae2-44c6-927e-f6d9225d5970" (UID: "90bdf335-aae2-44c6-927e-f6d9225d5970"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.366309 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90bdf335-aae2-44c6-927e-f6d9225d5970-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.394412 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn" (OuterVolumeSpecName: "kube-api-access-vv2zn") pod "90bdf335-aae2-44c6-927e-f6d9225d5970" (UID: "90bdf335-aae2-44c6-927e-f6d9225d5970"). InnerVolumeSpecName "kube-api-access-vv2zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:47 crc kubenswrapper[4957]: I1128 21:09:47.468828 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv2zn\" (UniqueName: \"kubernetes.io/projected/90bdf335-aae2-44c6-927e-f6d9225d5970-kube-api-access-vv2zn\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:48 crc kubenswrapper[4957]: E1128 21:09:48.239518 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752b9e43_44cd_4526_8393_6ae735497707.slice/crio-dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8.scope\": RecentStats: unable to find data in memory cache]" Nov 28 21:09:48 crc kubenswrapper[4957]: E1128 21:09:48.239540 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752b9e43_44cd_4526_8393_6ae735497707.slice/crio-conmon-dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752b9e43_44cd_4526_8393_6ae735497707.slice/crio-dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8.scope\": RecentStats: unable to find data in memory cache]" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.358386 4957 generic.go:334] "Generic (PLEG): container finished" podID="396562bc-990c-4874-894c-e553f8b3dae7" containerID="effcb0c3cd0c8a3dfd159e80c73618f8f4a23a27ca559dd7532e8f835c678840" exitCode=0 Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.358449 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerDied","Data":"effcb0c3cd0c8a3dfd159e80c73618f8f4a23a27ca559dd7532e8f835c678840"} Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.362031 4957 generic.go:334] "Generic (PLEG): container finished" podID="752b9e43-44cd-4526-8393-6ae735497707" containerID="dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8" exitCode=0 Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.362185 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerDied","Data":"dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8"} Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.483770 4957 scope.go:117] "RemoveContainer" containerID="ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.739500 4957 scope.go:117] "RemoveContainer" containerID="64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70" Nov 28 21:09:48 crc kubenswrapper[4957]: E1128 21:09:48.740302 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70\": container with ID starting with 64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70 not found: ID does not exist" containerID="64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.740362 4957 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70"} err="failed to get container status \"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70\": rpc error: code = NotFound desc = could not find container \"64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70\": container with ID starting with 64e1a9fcf46d08124dbfd72a7e3670832139eeca37e69cfeebd0fab554320b70 not found: ID does not exist" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.740402 4957 scope.go:117] "RemoveContainer" containerID="ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76" Nov 28 21:09:48 crc kubenswrapper[4957]: E1128 21:09:48.740866 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76\": container with ID starting with ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76 not found: ID does not exist" containerID="ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.740896 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76"} err="failed to get container status \"ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76\": rpc error: code = NotFound desc = could not find container \"ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76\": container with ID starting with ae78851ef1d25302c6466b14d3f3da9743e02ddb91528ede13da2b73710eee76 not found: ID does not exist" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.769805 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.814115 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-njhfl" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.826921 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.840075 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.857684 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.870575 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.870792 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.889534 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.897518 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kn45\" (UniqueName: \"kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45\") pod \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.898406 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts\") pod \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\" (UID: \"2e9a05c8-f27b-4636-a835-ceb3a38cf708\") " Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.900518 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e9a05c8-f27b-4636-a835-ceb3a38cf708" (UID: "2e9a05c8-f27b-4636-a835-ceb3a38cf708"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:48 crc kubenswrapper[4957]: I1128 21:09:48.910448 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45" (OuterVolumeSpecName: "kube-api-access-9kn45") pod "2e9a05c8-f27b-4636-a835-ceb3a38cf708" (UID: "2e9a05c8-f27b-4636-a835-ceb3a38cf708"). InnerVolumeSpecName "kube-api-access-9kn45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.003967 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts\") pod \"7bdf6984-caf0-430e-bb97-52e25019aa8f\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbp85\" (UniqueName: \"kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85\") pod \"b166cebe-60ab-4a64-843d-7f4c0586c788\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004109 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2prw\" (UniqueName: \"kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw\") pod \"7330b121-68e0-43b8-ba88-a8c6f08c878b\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004154 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts\") pod \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004183 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts\") pod \"d40378c2-5f58-4240-8b46-9b8503574a70\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004222 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts\") pod \"7330b121-68e0-43b8-ba88-a8c6f08c878b\" (UID: \"7330b121-68e0-43b8-ba88-a8c6f08c878b\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004263 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrdr4\" (UniqueName: \"kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4\") pod \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\" (UID: \"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004318 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts\") pod \"b166cebe-60ab-4a64-843d-7f4c0586c788\" (UID: \"b166cebe-60ab-4a64-843d-7f4c0586c788\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004343 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts\") pod \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004419 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z768\" (UniqueName: 
\"kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768\") pod \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\" (UID: \"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004455 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw2vq\" (UniqueName: \"kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq\") pod \"d40378c2-5f58-4240-8b46-9b8503574a70\" (UID: \"d40378c2-5f58-4240-8b46-9b8503574a70\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.004481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9bxq\" (UniqueName: \"kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq\") pod \"7bdf6984-caf0-430e-bb97-52e25019aa8f\" (UID: \"7bdf6984-caf0-430e-bb97-52e25019aa8f\") " Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.005410 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kn45\" (UniqueName: \"kubernetes.io/projected/2e9a05c8-f27b-4636-a835-ceb3a38cf708-kube-api-access-9kn45\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.005434 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9a05c8-f27b-4636-a835-ceb3a38cf708-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.006317 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" (UID: "a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.006455 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bdf6984-caf0-430e-bb97-52e25019aa8f" (UID: "7bdf6984-caf0-430e-bb97-52e25019aa8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.006711 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" (UID: "d9abc1a5-b97a-40ff-a510-ba9c9a5deabc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.007100 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b166cebe-60ab-4a64-843d-7f4c0586c788" (UID: "b166cebe-60ab-4a64-843d-7f4c0586c788"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.007106 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d40378c2-5f58-4240-8b46-9b8503574a70" (UID: "d40378c2-5f58-4240-8b46-9b8503574a70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.008432 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7330b121-68e0-43b8-ba88-a8c6f08c878b" (UID: "7330b121-68e0-43b8-ba88-a8c6f08c878b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.009052 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw" (OuterVolumeSpecName: "kube-api-access-j2prw") pod "7330b121-68e0-43b8-ba88-a8c6f08c878b" (UID: "7330b121-68e0-43b8-ba88-a8c6f08c878b"). InnerVolumeSpecName "kube-api-access-j2prw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.010184 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85" (OuterVolumeSpecName: "kube-api-access-xbp85") pod "b166cebe-60ab-4a64-843d-7f4c0586c788" (UID: "b166cebe-60ab-4a64-843d-7f4c0586c788"). InnerVolumeSpecName "kube-api-access-xbp85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.011005 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4" (OuterVolumeSpecName: "kube-api-access-rrdr4") pod "a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" (UID: "a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337"). InnerVolumeSpecName "kube-api-access-rrdr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.012664 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768" (OuterVolumeSpecName: "kube-api-access-7z768") pod "d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" (UID: "d9abc1a5-b97a-40ff-a510-ba9c9a5deabc"). InnerVolumeSpecName "kube-api-access-7z768". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.018571 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq" (OuterVolumeSpecName: "kube-api-access-zw2vq") pod "d40378c2-5f58-4240-8b46-9b8503574a70" (UID: "d40378c2-5f58-4240-8b46-9b8503574a70"). InnerVolumeSpecName "kube-api-access-zw2vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.027575 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq" (OuterVolumeSpecName: "kube-api-access-x9bxq") pod "7bdf6984-caf0-430e-bb97-52e25019aa8f" (UID: "7bdf6984-caf0-430e-bb97-52e25019aa8f"). InnerVolumeSpecName "kube-api-access-x9bxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107045 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b166cebe-60ab-4a64-843d-7f4c0586c788-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107080 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107089 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z768\" (UniqueName: \"kubernetes.io/projected/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc-kube-api-access-7z768\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107100 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw2vq\" (UniqueName: \"kubernetes.io/projected/d40378c2-5f58-4240-8b46-9b8503574a70-kube-api-access-zw2vq\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107112 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9bxq\" (UniqueName: \"kubernetes.io/projected/7bdf6984-caf0-430e-bb97-52e25019aa8f-kube-api-access-x9bxq\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107121 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdf6984-caf0-430e-bb97-52e25019aa8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107129 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbp85\" (UniqueName: \"kubernetes.io/projected/b166cebe-60ab-4a64-843d-7f4c0586c788-kube-api-access-xbp85\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107137 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2prw\" (UniqueName: \"kubernetes.io/projected/7330b121-68e0-43b8-ba88-a8c6f08c878b-kube-api-access-j2prw\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107145 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107153 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40378c2-5f58-4240-8b46-9b8503574a70-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107160 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7330b121-68e0-43b8-ba88-a8c6f08c878b-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.107168 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrdr4\" (UniqueName: \"kubernetes.io/projected/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337-kube-api-access-rrdr4\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.380008 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8642b" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.380154 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8642b" event={"ID":"a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337","Type":"ContainerDied","Data":"15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.381663 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15485a391d761d39d2420cabe15c5206881c643738569b1c19f1225b04909ba5" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.388930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a86c-account-create-update-q856p" event={"ID":"b166cebe-60ab-4a64-843d-7f4c0586c788","Type":"ContainerDied","Data":"c1cf8719a53d868f94b7a0f7bf172627027f0ad9b593a94eec4197c1b8aa1085"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.388969 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cf8719a53d868f94b7a0f7bf172627027f0ad9b593a94eec4197c1b8aa1085" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.389041 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a86c-account-create-update-q856p" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.390964 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-rxcxq" event={"ID":"2e9a05c8-f27b-4636-a835-ceb3a38cf708","Type":"ContainerDied","Data":"9eb4998602978afc0204abf90dd8ff3024f99b2df489ff72fde495b0a72ed4c0"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.391009 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb4998602978afc0204abf90dd8ff3024f99b2df489ff72fde495b0a72ed4c0" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.391061 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-efd2-account-create-update-rxcxq" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.394710 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" event={"ID":"7330b121-68e0-43b8-ba88-a8c6f08c878b","Type":"ContainerDied","Data":"60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.394748 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fc7f7e69467874b956c6d9169f0c0165b9d30db0bfe79e7cd990a37a6a210f" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.394831 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e779-account-create-update-kqnth" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.412201 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerStarted","Data":"129144dcbc530850a176fbfb46943deab971ce1b88efa897a9f9406693c41cb3"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.412478 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.420707 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-njhfl" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.423268 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-njhfl" event={"ID":"d9abc1a5-b97a-40ff-a510-ba9c9a5deabc","Type":"ContainerDied","Data":"edaf002e17d784f7d9b91116735518af1ec309191322ab6b920f0eda52436341"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.423313 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edaf002e17d784f7d9b91116735518af1ec309191322ab6b920f0eda52436341" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.430028 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerStarted","Data":"d5952c73bcd1d128b1c2ae5639b7094369a74077616a79fa343219238524b3d1"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.430936 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.439558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bzbx8" event={"ID":"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7","Type":"ContainerStarted","Data":"1f1bbc4f03e63c02ca55a0d454e6a8473777454f4714b4b735cd44f8ed236139"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.442381 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3780-account-create-update-nzcdq" event={"ID":"d40378c2-5f58-4240-8b46-9b8503574a70","Type":"ContainerDied","Data":"6c5fb8cce349106ba4c2fdcebaed57fb169a863f33039a1e7c83c75fedb40784"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.442466 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c5fb8cce349106ba4c2fdcebaed57fb169a863f33039a1e7c83c75fedb40784" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.442522 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3780-account-create-update-nzcdq" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.457072 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgcv6" event={"ID":"7bdf6984-caf0-430e-bb97-52e25019aa8f","Type":"ContainerDied","Data":"f2bceafbeca9529bc0a824a3c0784d19ec6d54d670c7447ae614dbd31d622e2a"} Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.457111 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fgcv6" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.457126 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2bceafbeca9529bc0a824a3c0784d19ec6d54d670c7447ae614dbd31d622e2a" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.480062 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bzbx8" podStartSLOduration=6.054156706 podStartE2EDuration="10.480040451s" podCreationTimestamp="2025-11-28 21:09:39 +0000 UTC" firstStartedPulling="2025-11-28 21:09:44.139416456 +0000 UTC m=+1223.608064365" lastFinishedPulling="2025-11-28 21:09:48.565300211 +0000 UTC m=+1228.033948110" observedRunningTime="2025-11-28 21:09:49.469791232 +0000 UTC m=+1228.938439141" watchObservedRunningTime="2025-11-28 21:09:49.480040451 +0000 UTC m=+1228.948688360" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.483070 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.983644568 podStartE2EDuration="52.483059344s" podCreationTimestamp="2025-11-28 21:08:57 +0000 UTC" firstStartedPulling="2025-11-28 21:08:59.073464628 +0000 UTC m=+1178.542112537" lastFinishedPulling="2025-11-28 21:09:14.572879404 +0000 UTC m=+1194.041527313" observedRunningTime="2025-11-28 21:09:49.440322037 +0000 UTC m=+1228.908969936" watchObservedRunningTime="2025-11-28 21:09:49.483059344 +0000 UTC m=+1228.951707253" Nov 28 21:09:49 crc kubenswrapper[4957]: I1128 21:09:49.497011 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.024550374 podStartE2EDuration="52.496993182s" podCreationTimestamp="2025-11-28 21:08:57 +0000 UTC" firstStartedPulling="2025-11-28 21:09:02.068805279 +0000 UTC m=+1181.537453188" lastFinishedPulling="2025-11-28 21:09:14.541248087 +0000 UTC m=+1194.009895996" observedRunningTime="2025-11-28 21:09:49.48989495 +0000 UTC m=+1228.958542859" watchObservedRunningTime="2025-11-28 21:09:49.496993182 +0000 UTC m=+1228.965641091" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.185808 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-ddc747d85-tg9cf" podUID="de78648e-7956-4dc9-8905-28dc0700ff40" containerName="console" containerID="cri-o://bc68ed8834f2de1a617e326c198e646610a1e1928ad1ccb42cfba54e75218653" gracePeriod=15 Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.353491 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:09:51 crc kubenswrapper[4957]: E1128 21:09:51.353657 4957 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 21:09:51 crc kubenswrapper[4957]: E1128 21:09:51.353840 4957 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 21:09:51 crc kubenswrapper[4957]: E1128 21:09:51.353910 4957 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift podName:ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5 nodeName:}" failed. 
No retries permitted until 2025-11-28 21:10:07.353887737 +0000 UTC m=+1246.822535646 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift") pod "swift-storage-0" (UID: "ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5") : configmap "swift-ring-files" not found Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.492765 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-ddc747d85-tg9cf_de78648e-7956-4dc9-8905-28dc0700ff40/console/0.log" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.492825 4957 generic.go:334] "Generic (PLEG): container finished" podID="de78648e-7956-4dc9-8905-28dc0700ff40" containerID="bc68ed8834f2de1a617e326c198e646610a1e1928ad1ccb42cfba54e75218653" exitCode=2 Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.492861 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ddc747d85-tg9cf" event={"ID":"de78648e-7956-4dc9-8905-28dc0700ff40","Type":"ContainerDied","Data":"bc68ed8834f2de1a617e326c198e646610a1e1928ad1ccb42cfba54e75218653"} Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.856699 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-ddc747d85-tg9cf_de78648e-7956-4dc9-8905-28dc0700ff40/console/0.log" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.857048 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.969914 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b22b\" (UniqueName: \"kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970006 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970063 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970082 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970166 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970285 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.970325 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert\") pod \"de78648e-7956-4dc9-8905-28dc0700ff40\" (UID: \"de78648e-7956-4dc9-8905-28dc0700ff40\") " Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.972658 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.973117 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca" (OuterVolumeSpecName: "service-ca") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.973515 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config" (OuterVolumeSpecName: "console-config") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.973928 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.981919 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.986383 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:09:51 crc kubenswrapper[4957]: I1128 21:09:51.986410 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b" (OuterVolumeSpecName: "kube-api-access-4b22b") pod "de78648e-7956-4dc9-8905-28dc0700ff40" (UID: "de78648e-7956-4dc9-8905-28dc0700ff40"). InnerVolumeSpecName "kube-api-access-4b22b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072751 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b22b\" (UniqueName: \"kubernetes.io/projected/de78648e-7956-4dc9-8905-28dc0700ff40-kube-api-access-4b22b\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072784 4957 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072794 4957 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072803 4957 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072812 4957 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072820 4957 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de78648e-7956-4dc9-8905-28dc0700ff40-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.072830 4957 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de78648e-7956-4dc9-8905-28dc0700ff40-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.417738 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-l6jkj"] Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418519 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418537 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418553 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7330b121-68e0-43b8-ba88-a8c6f08c878b" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418560 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7330b121-68e0-43b8-ba88-a8c6f08c878b" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418582 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="90bdf335-aae2-44c6-927e-f6d9225d5970" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418590 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bdf335-aae2-44c6-927e-f6d9225d5970" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418605 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="init" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418612 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="init" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418632 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="dnsmasq-dns" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418640 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="dnsmasq-dns" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418653 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b166cebe-60ab-4a64-843d-7f4c0586c788" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418659 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b166cebe-60ab-4a64-843d-7f4c0586c788" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418672 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418679 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418696 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9a05c8-f27b-4636-a835-ceb3a38cf708" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418705 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9a05c8-f27b-4636-a835-ceb3a38cf708" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418716 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de78648e-7956-4dc9-8905-28dc0700ff40" containerName="console" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418723 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="de78648e-7956-4dc9-8905-28dc0700ff40" containerName="console" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418735 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40378c2-5f58-4240-8b46-9b8503574a70" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418744 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40378c2-5f58-4240-8b46-9b8503574a70" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: E1128 21:09:52.418759 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdf6984-caf0-430e-bb97-52e25019aa8f" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.418767 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdf6984-caf0-430e-bb97-52e25019aa8f" containerName="mariadb-database-create" Nov 28 21:09:52 crc 
kubenswrapper[4957]: I1128 21:09:52.419005 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdf6984-caf0-430e-bb97-52e25019aa8f" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419025 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="de78648e-7956-4dc9-8905-28dc0700ff40" containerName="console" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419035 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b34f895-848b-4d42-bacc-04dd981362c9" containerName="dnsmasq-dns" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419048 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b166cebe-60ab-4a64-843d-7f4c0586c788" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419062 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7330b121-68e0-43b8-ba88-a8c6f08c878b" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419096 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40378c2-5f58-4240-8b46-9b8503574a70" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419110 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9a05c8-f27b-4636-a835-ceb3a38cf708" containerName="mariadb-account-create-update" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419126 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419138 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bdf335-aae2-44c6-927e-f6d9225d5970" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419148 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" containerName="mariadb-database-create" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.419956 4957 util.go:30] "No sandbox for pod can be found. 
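
Note: the cpu_manager.go:410 / memory_manager.go:354 "RemoveStaleState" bursts fire while admitting openstack/glance-db-sync-l6jkj: both resource managers scrub checkpointed assignments belonging to pods that no longer exist (the console and db-create/account-create pods torn down above). The CPU assignments are persisted under /var/lib/kubelet; a small reader sketch (field names assumed from the on-disk JSON; entries is only populated under the static policy):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Loose mirror of the cpu_manager_state checkpoint (assumed schema).
    type cpuCheckpoint struct {
        PolicyName    string                       `json:"policyName"`
        DefaultCPUSet string                       `json:"defaultCpuSet"`
        Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
    }

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var cp cpuCheckpoint
        if err := json.Unmarshal(raw, &cp); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("policy=%q defaultCpuSet=%q pinned pods=%d\n", cp.PolicyName, cp.DefaultCPUSet, len(cp.Entries))
        for podUID := range cp.Entries {
            fmt.Println("  checkpointed pod:", podUID) // stale UIDs here are what RemoveStaleState drops
        }
    }
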
Need to start a new one" pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.429638 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.429850 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-26vwp" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.432416 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l6jkj"] Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.484509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.484588 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.484653 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gbnn\" (UniqueName: \"kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.484777 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.510290 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerStarted","Data":"9d268751f5e2acd92321ba75b64bbeb8f207348ba78549b9aeffdafcc28e4c8c"} Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.514052 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-ddc747d85-tg9cf_de78648e-7956-4dc9-8905-28dc0700ff40/console/0.log" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.514107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ddc747d85-tg9cf" event={"ID":"de78648e-7956-4dc9-8905-28dc0700ff40","Type":"ContainerDied","Data":"95537fd95b8df43437f7af7f455a7165cf440491757e5d944aa92cb62a411533"} Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.514142 4957 scope.go:117] "RemoveContainer" containerID="bc68ed8834f2de1a617e326c198e646610a1e1928ad1ccb42cfba54e75218653" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.514283 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ddc747d85-tg9cf" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.557953 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.103969097 podStartE2EDuration="48.557928655s" podCreationTimestamp="2025-11-28 21:09:04 +0000 UTC" firstStartedPulling="2025-11-28 21:09:15.084175148 +0000 UTC m=+1194.552823057" lastFinishedPulling="2025-11-28 21:09:51.538134706 +0000 UTC m=+1231.006782615" observedRunningTime="2025-11-28 21:09:52.548855665 +0000 UTC m=+1232.017503584" watchObservedRunningTime="2025-11-28 21:09:52.557928655 +0000 UTC m=+1232.026576564" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.577539 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.586503 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.586824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gbnn\" (UniqueName: \"kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.587045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.587158 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-ddc747d85-tg9cf"] Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.587176 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.591186 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.591353 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.591720 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data\") pod \"glance-db-sync-l6jkj\" 
(UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.608358 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gbnn\" (UniqueName: \"kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn\") pod \"glance-db-sync-l6jkj\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.760341 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-l6jkj" Nov 28 21:09:52 crc kubenswrapper[4957]: I1128 21:09:52.827323 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de78648e-7956-4dc9-8905-28dc0700ff40" path="/var/lib/kubelet/pods/de78648e-7956-4dc9-8905-28dc0700ff40/volumes" Nov 28 21:09:53 crc kubenswrapper[4957]: I1128 21:09:53.942508 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-l6jkj"] Nov 28 21:09:53 crc kubenswrapper[4957]: W1128 21:09:53.957395 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8aef833_7bf5_4ae4_9fc9_62e1bf24871f.slice/crio-d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c WatchSource:0}: Error finding container d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c: Status 404 returned error can't find the container with id d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.536702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6jkj" event={"ID":"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f","Type":"ContainerStarted","Data":"d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c"} Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.621384 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh"] Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.622657 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.639584 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh"] Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.724188 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.724608 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdgv\" (UniqueName: \"kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.824828 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-8468-account-create-update-sbddd"] Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.826247 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.827318 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.827513 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdgv\" (UniqueName: \"kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.828253 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.830494 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.838855 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8468-account-create-update-sbddd"] Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.858883 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njdgv\" (UniqueName: \"kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv\") pod \"mysqld-exporter-openstack-cell1-db-create-dl6fh\" 
(UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.929437 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mcdq\" (UniqueName: \"kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.929721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:54 crc kubenswrapper[4957]: I1128 21:09:54.943170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.032420 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.032885 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mcdq\" (UniqueName: \"kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.033853 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.055894 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mcdq\" (UniqueName: \"kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq\") pod \"mysqld-exporter-8468-account-create-update-sbddd\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.154547 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.400798 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh"] Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.551634 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" event={"ID":"c5de5502-9325-48ca-8ced-377dd347c155","Type":"ContainerStarted","Data":"60303f3ae602e3c26f22e0512da50d208478c9f974ea7f2ac1278cffa3452b3a"} Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.643696 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8468-account-create-update-sbddd"] Nov 28 21:09:55 crc kubenswrapper[4957]: W1128 21:09:55.647784 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1018b31e_7e63_4e56_9ec8_29bcf03aa787.slice/crio-2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8 WatchSource:0}: Error finding container 2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8: Status 404 returned error can't find the container with id 2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8 Nov 28 21:09:55 crc kubenswrapper[4957]: I1128 21:09:55.770742 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.569098 4957 generic.go:334] "Generic (PLEG): container finished" podID="1018b31e-7e63-4e56-9ec8-29bcf03aa787" containerID="b558e70ce8e2b3cfc56a5f374134361512016b91e23031ef5c926fcabf100440" exitCode=0 Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.569481 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" event={"ID":"1018b31e-7e63-4e56-9ec8-29bcf03aa787","Type":"ContainerDied","Data":"b558e70ce8e2b3cfc56a5f374134361512016b91e23031ef5c926fcabf100440"} Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.569514 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" event={"ID":"1018b31e-7e63-4e56-9ec8-29bcf03aa787","Type":"ContainerStarted","Data":"2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8"} Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.574909 4957 generic.go:334] "Generic (PLEG): container finished" podID="c5de5502-9325-48ca-8ced-377dd347c155" containerID="f2658daf04de09433dab00380155ff87c0074e7efa09909506f38e2b7eee1331" exitCode=0 Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.574974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" event={"ID":"c5de5502-9325-48ca-8ced-377dd347c155","Type":"ContainerDied","Data":"f2658daf04de09433dab00380155ff87c0074e7efa09909506f38e2b7eee1331"} Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.579185 4957 generic.go:334] "Generic (PLEG): container finished" podID="93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" containerID="1f1bbc4f03e63c02ca55a0d454e6a8473777454f4714b4b735cd44f8ed236139" exitCode=0 Nov 28 21:09:56 crc kubenswrapper[4957]: I1128 21:09:56.579268 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bzbx8" 
event={"ID":"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7","Type":"ContainerDied","Data":"1f1bbc4f03e63c02ca55a0d454e6a8473777454f4714b4b735cd44f8ed236139"} Nov 28 21:09:57 crc kubenswrapper[4957]: I1128 21:09:57.953395 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dzt8d" podUID="6bc9960f-fdff-42fa-8cdd-4ec0d88f359d" containerName="ovn-controller" probeResult="failure" output=< Nov 28 21:09:57 crc kubenswrapper[4957]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 21:09:57 crc kubenswrapper[4957]: > Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.131032 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.241472 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.247039 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.291475 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts\") pod \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.292036 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1018b31e-7e63-4e56-9ec8-29bcf03aa787" (UID: "1018b31e-7e63-4e56-9ec8-29bcf03aa787"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.292082 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mcdq\" (UniqueName: \"kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq\") pod \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\" (UID: \"1018b31e-7e63-4e56-9ec8-29bcf03aa787\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.292660 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1018b31e-7e63-4e56-9ec8-29bcf03aa787-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.298813 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq" (OuterVolumeSpecName: "kube-api-access-4mcdq") pod "1018b31e-7e63-4e56-9ec8-29bcf03aa787" (UID: "1018b31e-7e63-4e56-9ec8-29bcf03aa787"). InnerVolumeSpecName "kube-api-access-4mcdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.394004 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.394078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.394595 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvc7j\" (UniqueName: \"kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395009 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395059 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395105 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395160 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts\") pod \"c5de5502-9325-48ca-8ced-377dd347c155\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395794 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.395860 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5de5502-9325-48ca-8ced-377dd347c155" (UID: "c5de5502-9325-48ca-8ced-377dd347c155"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.396223 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.396442 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njdgv\" (UniqueName: \"kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv\") pod \"c5de5502-9325-48ca-8ced-377dd347c155\" (UID: \"c5de5502-9325-48ca-8ced-377dd347c155\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.396510 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle\") pod \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\" (UID: \"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7\") " Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.398196 4957 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.398251 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mcdq\" (UniqueName: \"kubernetes.io/projected/1018b31e-7e63-4e56-9ec8-29bcf03aa787-kube-api-access-4mcdq\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.398262 4957 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.398273 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5de5502-9325-48ca-8ced-377dd347c155-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.405721 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv" (OuterVolumeSpecName: "kube-api-access-njdgv") pod "c5de5502-9325-48ca-8ced-377dd347c155" (UID: "c5de5502-9325-48ca-8ced-377dd347c155"). InnerVolumeSpecName "kube-api-access-njdgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.406048 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j" (OuterVolumeSpecName: "kube-api-access-pvc7j") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "kube-api-access-pvc7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.410599 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.420010 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts" (OuterVolumeSpecName: "scripts") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.423097 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.446817 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" (UID: "93cfcc7a-cedc-4adc-abb3-eef0aec22ae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500231 4957 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500274 4957 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500286 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvc7j\" (UniqueName: \"kubernetes.io/projected/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-kube-api-access-pvc7j\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500296 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500305 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njdgv\" (UniqueName: \"kubernetes.io/projected/c5de5502-9325-48ca-8ced-377dd347c155-kube-api-access-njdgv\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.500313 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93cfcc7a-cedc-4adc-abb3-eef0aec22ae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.599343 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.601786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" event={"ID":"1018b31e-7e63-4e56-9ec8-29bcf03aa787","Type":"ContainerDied","Data":"2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8"} Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.601815 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c94e4a3070ddca60d43c00508034a73142fe0af8e981e0c685a3fdc9c57dad8" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.601856 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8468-account-create-update-sbddd" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.605990 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.607285 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh" event={"ID":"c5de5502-9325-48ca-8ced-377dd347c155","Type":"ContainerDied","Data":"60303f3ae602e3c26f22e0512da50d208478c9f974ea7f2ac1278cffa3452b3a"} Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.607337 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60303f3ae602e3c26f22e0512da50d208478c9f974ea7f2ac1278cffa3452b3a" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.608935 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bzbx8" event={"ID":"93cfcc7a-cedc-4adc-abb3-eef0aec22ae7","Type":"ContainerDied","Data":"a692e1a8e70bae249ba773538f4ccf4d944e9f48abe5e420f3dbc8fd860ff11c"} Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.608960 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a692e1a8e70bae249ba773538f4ccf4d944e9f48abe5e420f3dbc8fd860ff11c" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.609035 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bzbx8" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855131 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-ts4r7"] Nov 28 21:09:58 crc kubenswrapper[4957]: E1128 21:09:58.855559 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5de5502-9325-48ca-8ced-377dd347c155" containerName="mariadb-database-create" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855570 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5de5502-9325-48ca-8ced-377dd347c155" containerName="mariadb-database-create" Nov 28 21:09:58 crc kubenswrapper[4957]: E1128 21:09:58.855596 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" containerName="swift-ring-rebalance" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855603 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" containerName="swift-ring-rebalance" Nov 28 21:09:58 crc kubenswrapper[4957]: E1128 21:09:58.855617 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1018b31e-7e63-4e56-9ec8-29bcf03aa787" containerName="mariadb-account-create-update" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855623 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1018b31e-7e63-4e56-9ec8-29bcf03aa787" containerName="mariadb-account-create-update" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855794 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cfcc7a-cedc-4adc-abb3-eef0aec22ae7" containerName="swift-ring-rebalance" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855807 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5de5502-9325-48ca-8ced-377dd347c155" containerName="mariadb-database-create" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.855829 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1018b31e-7e63-4e56-9ec8-29bcf03aa787" containerName="mariadb-account-create-update" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.856472 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.882944 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:09:58 crc kubenswrapper[4957]: I1128 21:09:58.888637 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ts4r7"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.009506 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngznd\" (UniqueName: \"kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.009575 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.087380 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8a06-account-create-update-r2k25"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.088972 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.098002 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.111688 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngznd\" (UniqueName: \"kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.111736 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.112574 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.130278 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8a06-account-create-update-r2k25"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.167912 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngznd\" (UniqueName: \"kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd\") pod \"heat-db-create-ts4r7\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.211568 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-ts4r7" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.246354 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4vmcg"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.248936 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.250986 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8xp\" (UniqueName: \"kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.254600 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.348227 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vmcg"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.356744 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j986g\" (UniqueName: \"kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g\") pod \"cinder-db-create-4vmcg\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.356792 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds8xp\" (UniqueName: \"kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.356856 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts\") pod \"cinder-db-create-4vmcg\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.356904 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.371089 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.374259 4957 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-de40-account-create-update-6c8ts"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.375625 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.377701 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.391154 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8xp\" (UniqueName: \"kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp\") pod \"heat-8a06-account-create-update-r2k25\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.408334 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-de40-account-create-update-6c8ts"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.423158 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.433858 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d5btv"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.435304 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.461046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6j2\" (UniqueName: \"kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.461086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j986g\" (UniqueName: \"kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g\") pod \"cinder-db-create-4vmcg\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.461315 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.461741 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts\") pod \"cinder-db-create-4vmcg\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.462748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts\") pod \"cinder-db-create-4vmcg\" (UID: 
\"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.477234 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rb757"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.479638 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.486172 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zhjp" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.486399 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.486494 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.486569 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.490274 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-65b7-account-create-update-g5twz"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.491879 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.492526 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j986g\" (UniqueName: \"kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g\") pod \"cinder-db-create-4vmcg\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.494721 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.510690 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d5btv"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.523677 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rb757"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.532696 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-65b7-account-create-update-g5twz"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564075 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564192 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564336 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclzt\" (UniqueName: \"kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt\") 
pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564415 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzc8\" (UniqueName: \"kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564606 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6j2\" (UniqueName: \"kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564810 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564870 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.564937 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkcz\" (UniqueName: \"kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.565746 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.587763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6j2\" (UniqueName: \"kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2\") pod \"barbican-de40-account-create-update-6c8ts\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 
21:09:59.666653 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.666725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclzt\" (UniqueName: \"kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.666766 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzc8\" (UniqueName: \"kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.666837 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.666869 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.666901 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrkcz\" (UniqueName: \"kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.667729 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.667606 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.667667 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 
21:09:59.672749 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vmcg" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.675319 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qwxh2"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.676161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.676695 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.681969 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.693903 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-48df-account-create-update-r4h6j"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.695234 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.697494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzc8\" (UniqueName: \"kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8\") pod \"cinder-65b7-account-create-update-g5twz\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.700953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclzt\" (UniqueName: \"kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt\") pod \"keystone-db-sync-rb757\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") " pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.701235 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.704809 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkcz\" (UniqueName: \"kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz\") pod \"barbican-db-create-d5btv\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.715137 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qwxh2"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.726866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-48df-account-create-update-r4h6j"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.762279 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.772246 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccsq8\" (UniqueName: \"kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.772401 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.772430 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvsw7\" (UniqueName: \"kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.772464 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.774346 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d5btv" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.803099 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rb757" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.822429 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.874033 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccsq8\" (UniqueName: \"kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.874485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.874526 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvsw7\" (UniqueName: \"kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.874562 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.875192 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.875603 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.876882 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ts4r7"] Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.894762 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccsq8\" (UniqueName: \"kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8\") pod \"neutron-48df-account-create-update-r4h6j\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:09:59 crc kubenswrapper[4957]: I1128 21:09:59.895169 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvsw7\" (UniqueName: \"kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7\") pod \"neutron-db-create-qwxh2\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " pod="openstack/neutron-db-create-qwxh2" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.019386 4957 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-db-create-qwxh2" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.044969 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.112973 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8a06-account-create-update-r2k25"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.167112 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.168784 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.171291 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.173469 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.287467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.287545 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.287665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtp9j\" (UniqueName: \"kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.315024 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vmcg"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.391527 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtp9j\" (UniqueName: \"kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.391633 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.391672 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc 
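
Every kubenswrapper record in this stream uses the standard klog header: a severity letter fused with the month and day (I1128), the time with microseconds, the emitting PID, the source file:line, then a quoted message followed by key="value" pairs. A small stdlib-only parser sketch for pulling those fields apart, assuming the journald prefix ("Nov 28 21:10:00 crc kubenswrapper[4957]: ") has already been stripped; the regular expressions encode my reading of this layout, not an official klog grammar:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches lines like:
    //   I1128 21:10:00.287467 4957 reconciler_common.go:245] "msg" pod="..."
    // Groups: severity, MMDD, time, pid, file, line, remainder.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+):(\d+)\] (.*)$`)

    // kvPair pulls key="value" fields (pod=..., containerName=...) out of the
    // remainder, tolerating backslash-escaped quotes inside the value.
    var kvPair = regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)

    func main() {
        line := `I1128 21:10:00.287467 4957 reconciler_common.go:245] ` +
            `"operationExecutor.VerifyControllerAttachedVolume started for volume" ` +
            `pod="openstack/mysqld-exporter-0"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
        for _, kv := range kvPair.FindAllStringSubmatch(m[7], -1) {
            fmt.Printf("  %s = %s\n", kv[1], kv[2])
        }
    }

The pod="..." field extracted this way is usually the one worth filtering on when tracing a single workload, such as mysqld-exporter-0 above, through the stream.
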
kubenswrapper[4957]: I1128 21:10:00.399901 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.405941 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.479584 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d5btv"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.498228 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtp9j\" (UniqueName: \"kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j\") pod \"mysqld-exporter-0\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.641526 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.648642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8a06-account-create-update-r2k25" event={"ID":"7dab3e4c-800d-4848-9ff2-c5aed4c6a820","Type":"ContainerStarted","Data":"dce8fce0a983a116e26796c48b548c971be60b5a5f8b9524ceaadf3db0dd0e89"} Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.668772 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ts4r7" event={"ID":"f9b34c44-6e04-4e3f-8147-9616e4003021","Type":"ContainerStarted","Data":"ee95466286fb6514283f70dc7131ab844ffe2e2ae07cd52e3aa34ef32d37d8d9"} Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.688466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vmcg" event={"ID":"84499c1d-3d44-4911-85b4-c1dafdb93b03","Type":"ContainerStarted","Data":"2c9391ea6402584157426da937b3885f2db15c1b04a88f0384d0b0d67daa65fb"} Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.697107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d5btv" event={"ID":"9cf99417-e04e-43cd-87d4-86b1bd8f33cc","Type":"ContainerStarted","Data":"3c55e95e72addb9dd802e79074836b7764848db951499fbb02ba9dd0741c2f2a"} Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.885585 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-de40-account-create-update-6c8ts"] Nov 28 21:10:00 crc kubenswrapper[4957]: I1128 21:10:00.994430 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rb757"] Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.125023 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-48df-account-create-update-r4h6j"] Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.175047 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-65b7-account-create-update-g5twz"] Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.408237 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qwxh2"] Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 
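
The "SyncLoop (PLEG)" entries just above carry a plain JSON payload after event=: "ID" is the pod UID and "Data" is a container or sandbox hash, so the first ContainerStarted for a pod (as with heat-db-create-ts4r7 here) names its freshly created sandbox. A minimal decode of one payload copied verbatim from the log; plegEvent is an invented struct name whose fields simply mirror the JSON keys:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // plegEvent mirrors the event={...} payload printed by kubelet.go:2453.
    type plegEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
        Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"f9b34c44-6e04-4e3f-8147-9616e4003021",` +
            `"Type":"ContainerStarted",` +
            `"Data":"ee95466286fb6514283f70dc7131ab844ffe2e2ae07cd52e3aa34ef32d37d8d9"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        switch ev.Type {
        case "ContainerStarted":
            fmt.Printf("pod %s: container/sandbox %.12s started\n", ev.ID, ev.Data)
        case "ContainerDied":
            fmt.Printf("pod %s: container %.12s died\n", ev.ID, ev.Data)
        default:
            fmt.Printf("pod %s: %s\n", ev.ID, ev.Type)
        }
    }
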
21:10:01.583437 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.716115 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9b34c44-6e04-4e3f-8147-9616e4003021" containerID="525a7e0ea4496bde619cbaf479dd245215cc56bc1cde1c64e5fb2f0de91a6b0f" exitCode=0 Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.716232 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ts4r7" event={"ID":"f9b34c44-6e04-4e3f-8147-9616e4003021","Type":"ContainerDied","Data":"525a7e0ea4496bde619cbaf479dd245215cc56bc1cde1c64e5fb2f0de91a6b0f"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.718176 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"763de9cf-5d74-4977-b6d5-53430185b17b","Type":"ContainerStarted","Data":"9e89f316ece9500e28ccd89e1be23fc27b76246e889e5a8a9011b767362492a3"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.732758 4957 generic.go:334] "Generic (PLEG): container finished" podID="9cf99417-e04e-43cd-87d4-86b1bd8f33cc" containerID="4c2cd7f0c88891fde60f93c5b42bdca6caa45529ff5eef345e436b6eda113ded" exitCode=0 Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.732830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d5btv" event={"ID":"9cf99417-e04e-43cd-87d4-86b1bd8f33cc","Type":"ContainerDied","Data":"4c2cd7f0c88891fde60f93c5b42bdca6caa45529ff5eef345e436b6eda113ded"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.737416 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb757" event={"ID":"8196ca60-081d-4a36-acaf-d7e019bf2b12","Type":"ContainerStarted","Data":"c0a6e5f3096ae545bb4dc403c92ee12c84245436106e855c39973d9508171309"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.746569 4957 generic.go:334] "Generic (PLEG): container finished" podID="7dab3e4c-800d-4848-9ff2-c5aed4c6a820" containerID="8986c22e1ac6d67e7a3e69087cc610d29db36c6a606cf8eda08466ad7d3c025c" exitCode=0 Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.746715 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8a06-account-create-update-r2k25" event={"ID":"7dab3e4c-800d-4848-9ff2-c5aed4c6a820","Type":"ContainerDied","Data":"8986c22e1ac6d67e7a3e69087cc610d29db36c6a606cf8eda08466ad7d3c025c"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.750949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-65b7-account-create-update-g5twz" event={"ID":"2b0f6f45-582e-4856-8bec-00097857b539","Type":"ContainerStarted","Data":"01192659a69e9bfa3f28a295c46562f957aee248047cb9ef5de042f460639535"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.759884 4957 generic.go:334] "Generic (PLEG): container finished" podID="f2ddced4-cb68-46ae-a929-a528b32c5ed5" containerID="f7d0bf9ccdda3457ec050dddee01c8d94f5dd4d38e426d5b51a5ca74ac27db08" exitCode=0 Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.759968 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-de40-account-create-update-6c8ts" event={"ID":"f2ddced4-cb68-46ae-a929-a528b32c5ed5","Type":"ContainerDied","Data":"f7d0bf9ccdda3457ec050dddee01c8d94f5dd4d38e426d5b51a5ca74ac27db08"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.759994 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-de40-account-create-update-6c8ts" 
event={"ID":"f2ddced4-cb68-46ae-a929-a528b32c5ed5","Type":"ContainerStarted","Data":"bb61b088d03c4ade245fc9a48d57ecacdcfcd42178323a1fb6a5c48e50056574"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.771866 4957 generic.go:334] "Generic (PLEG): container finished" podID="84499c1d-3d44-4911-85b4-c1dafdb93b03" containerID="a207617a8722773beda4b1970512bf431eb8d8821a83bccc0c4b10d9cb9b5b45" exitCode=0 Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.771961 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vmcg" event={"ID":"84499c1d-3d44-4911-85b4-c1dafdb93b03","Type":"ContainerDied","Data":"a207617a8722773beda4b1970512bf431eb8d8821a83bccc0c4b10d9cb9b5b45"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.774178 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-48df-account-create-update-r4h6j" event={"ID":"a74265c6-4eb6-45a1-aa83-b0656eed2247","Type":"ContainerStarted","Data":"f5e009dd9ec17ceafc748cff82599fea24821b76f5791f923abe1f974c921494"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.780834 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qwxh2" event={"ID":"f5f5a642-e071-4830-b31e-7f0e8e7b73ef","Type":"ContainerStarted","Data":"220823a1eef5996b247d3a01d972ed12b21c61255d5d455cf6b6c0c2fdd1d63a"} Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.803031 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-65b7-account-create-update-g5twz" podStartSLOduration=2.803012867 podStartE2EDuration="2.803012867s" podCreationTimestamp="2025-11-28 21:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:01.787995587 +0000 UTC m=+1241.256643496" watchObservedRunningTime="2025-11-28 21:10:01.803012867 +0000 UTC m=+1241.271660766" Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.865623 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-48df-account-create-update-r4h6j" podStartSLOduration=2.86560255 podStartE2EDuration="2.86560255s" podCreationTimestamp="2025-11-28 21:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:01.85951952 +0000 UTC m=+1241.328167429" watchObservedRunningTime="2025-11-28 21:10:01.86560255 +0000 UTC m=+1241.334250459" Nov 28 21:10:01 crc kubenswrapper[4957]: I1128 21:10:01.879074 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-qwxh2" podStartSLOduration=2.879056191 podStartE2EDuration="2.879056191s" podCreationTimestamp="2025-11-28 21:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:01.874076399 +0000 UTC m=+1241.342724308" watchObservedRunningTime="2025-11-28 21:10:01.879056191 +0000 UTC m=+1241.347704100" Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.792430 4957 generic.go:334] "Generic (PLEG): container finished" podID="a74265c6-4eb6-45a1-aa83-b0656eed2247" containerID="8a261ec577d947cf676a7f0c644fd6d45f6f6d6b683737cae5a1e2dc2c8d0f22" exitCode=0 Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.792754 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-48df-account-create-update-r4h6j" 
event={"ID":"a74265c6-4eb6-45a1-aa83-b0656eed2247","Type":"ContainerDied","Data":"8a261ec577d947cf676a7f0c644fd6d45f6f6d6b683737cae5a1e2dc2c8d0f22"} Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.798098 4957 generic.go:334] "Generic (PLEG): container finished" podID="f5f5a642-e071-4830-b31e-7f0e8e7b73ef" containerID="e2d7036ada5bac93e3b4b9b51c213cb1c4118f52008ef9ff56447102af0ef019" exitCode=0 Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.798145 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qwxh2" event={"ID":"f5f5a642-e071-4830-b31e-7f0e8e7b73ef","Type":"ContainerDied","Data":"e2d7036ada5bac93e3b4b9b51c213cb1c4118f52008ef9ff56447102af0ef019"} Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.800545 4957 generic.go:334] "Generic (PLEG): container finished" podID="2b0f6f45-582e-4856-8bec-00097857b539" containerID="d335e7a0cac76ea31e3301e74e3cd633c66ed9cda6f282849a8114fd5fdd65a7" exitCode=0 Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.800697 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-65b7-account-create-update-g5twz" event={"ID":"2b0f6f45-582e-4856-8bec-00097857b539","Type":"ContainerDied","Data":"d335e7a0cac76ea31e3301e74e3cd633c66ed9cda6f282849a8114fd5fdd65a7"} Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.991791 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dzt8d" podUID="6bc9960f-fdff-42fa-8cdd-4ec0d88f359d" containerName="ovn-controller" probeResult="failure" output=< Nov 28 21:10:02 crc kubenswrapper[4957]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 21:10:02 crc kubenswrapper[4957]: > Nov 28 21:10:02 crc kubenswrapper[4957]: I1128 21:10:02.996589 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.035013 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cd25j" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.290297 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dzt8d-config-cqm2v"] Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.292658 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.295123 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.310424 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dzt8d-config-cqm2v"] Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409339 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409372 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlf5g\" (UniqueName: \"kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409472 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409525 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.409556 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.511770 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.511896 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlf5g\" (UniqueName: 
\"kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.512463 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.512600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.512755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.512866 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.513294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.514097 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.514568 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.514619 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.514748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.541092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlf5g\" (UniqueName: \"kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g\") pod \"ovn-controller-dzt8d-config-cqm2v\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.649728 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.653201 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ts4r7" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.715810 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngznd\" (UniqueName: \"kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd\") pod \"f9b34c44-6e04-4e3f-8147-9616e4003021\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.715995 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts\") pod \"f9b34c44-6e04-4e3f-8147-9616e4003021\" (UID: \"f9b34c44-6e04-4e3f-8147-9616e4003021\") " Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.717374 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9b34c44-6e04-4e3f-8147-9616e4003021" (UID: "f9b34c44-6e04-4e3f-8147-9616e4003021"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.720106 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd" (OuterVolumeSpecName: "kube-api-access-ngznd") pod "f9b34c44-6e04-4e3f-8147-9616e4003021" (UID: "f9b34c44-6e04-4e3f-8147-9616e4003021"). InnerVolumeSpecName "kube-api-access-ngznd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.819606 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngznd\" (UniqueName: \"kubernetes.io/projected/f9b34c44-6e04-4e3f-8147-9616e4003021-kube-api-access-ngznd\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.819634 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9b34c44-6e04-4e3f-8147-9616e4003021-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.823599 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ts4r7" event={"ID":"f9b34c44-6e04-4e3f-8147-9616e4003021","Type":"ContainerDied","Data":"ee95466286fb6514283f70dc7131ab844ffe2e2ae07cd52e3aa34ef32d37d8d9"} Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.823644 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee95466286fb6514283f70dc7131ab844ffe2e2ae07cd52e3aa34ef32d37d8d9" Nov 28 21:10:03 crc kubenswrapper[4957]: I1128 21:10:03.823673 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ts4r7" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.132114 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d5btv" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.154811 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.162566 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.182218 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4vmcg" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228249 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts\") pod \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228436 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts\") pod \"84499c1d-3d44-4911-85b4-c1dafdb93b03\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228521 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrkcz\" (UniqueName: \"kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz\") pod \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228557 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts\") pod \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228642 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6j2\" (UniqueName: \"kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2\") pod \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\" (UID: \"f2ddced4-cb68-46ae-a929-a528b32c5ed5\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228707 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j986g\" (UniqueName: \"kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g\") pod \"84499c1d-3d44-4911-85b4-c1dafdb93b03\" (UID: \"84499c1d-3d44-4911-85b4-c1dafdb93b03\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228740 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8xp\" (UniqueName: \"kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp\") pod \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\" (UID: \"7dab3e4c-800d-4848-9ff2-c5aed4c6a820\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.228792 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts\") pod \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\" (UID: \"9cf99417-e04e-43cd-87d4-86b1bd8f33cc\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.230176 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cf99417-e04e-43cd-87d4-86b1bd8f33cc" (UID: "9cf99417-e04e-43cd-87d4-86b1bd8f33cc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.230674 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2ddced4-cb68-46ae-a929-a528b32c5ed5" (UID: "f2ddced4-cb68-46ae-a929-a528b32c5ed5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.231050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84499c1d-3d44-4911-85b4-c1dafdb93b03" (UID: "84499c1d-3d44-4911-85b4-c1dafdb93b03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.232419 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dab3e4c-800d-4848-9ff2-c5aed4c6a820" (UID: "7dab3e4c-800d-4848-9ff2-c5aed4c6a820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.235311 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz" (OuterVolumeSpecName: "kube-api-access-xrkcz") pod "9cf99417-e04e-43cd-87d4-86b1bd8f33cc" (UID: "9cf99417-e04e-43cd-87d4-86b1bd8f33cc"). InnerVolumeSpecName "kube-api-access-xrkcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.236141 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp" (OuterVolumeSpecName: "kube-api-access-ds8xp") pod "7dab3e4c-800d-4848-9ff2-c5aed4c6a820" (UID: "7dab3e4c-800d-4848-9ff2-c5aed4c6a820"). InnerVolumeSpecName "kube-api-access-ds8xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.236182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g" (OuterVolumeSpecName: "kube-api-access-j986g") pod "84499c1d-3d44-4911-85b4-c1dafdb93b03" (UID: "84499c1d-3d44-4911-85b4-c1dafdb93b03"). InnerVolumeSpecName "kube-api-access-j986g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.236905 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2" (OuterVolumeSpecName: "kube-api-access-nh6j2") pod "f2ddced4-cb68-46ae-a929-a528b32c5ed5" (UID: "f2ddced4-cb68-46ae-a929-a528b32c5ed5"). InnerVolumeSpecName "kube-api-access-nh6j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.307286 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.336390 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccsq8\" (UniqueName: \"kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8\") pod \"a74265c6-4eb6-45a1-aa83-b0656eed2247\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.336934 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts\") pod \"a74265c6-4eb6-45a1-aa83-b0656eed2247\" (UID: \"a74265c6-4eb6-45a1-aa83-b0656eed2247\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337747 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337766 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2ddced4-cb68-46ae-a929-a528b32c5ed5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337776 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84499c1d-3d44-4911-85b4-c1dafdb93b03-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337787 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrkcz\" (UniqueName: \"kubernetes.io/projected/9cf99417-e04e-43cd-87d4-86b1bd8f33cc-kube-api-access-xrkcz\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337796 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337805 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh6j2\" (UniqueName: \"kubernetes.io/projected/f2ddced4-cb68-46ae-a929-a528b32c5ed5-kube-api-access-nh6j2\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337814 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j986g\" (UniqueName: \"kubernetes.io/projected/84499c1d-3d44-4911-85b4-c1dafdb93b03-kube-api-access-j986g\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.337822 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8xp\" (UniqueName: \"kubernetes.io/projected/7dab3e4c-800d-4848-9ff2-c5aed4c6a820-kube-api-access-ds8xp\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.340404 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a74265c6-4eb6-45a1-aa83-b0656eed2247" (UID: "a74265c6-4eb6-45a1-aa83-b0656eed2247"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.349428 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8" (OuterVolumeSpecName: "kube-api-access-ccsq8") pod "a74265c6-4eb6-45a1-aa83-b0656eed2247" (UID: "a74265c6-4eb6-45a1-aa83-b0656eed2247"). InnerVolumeSpecName "kube-api-access-ccsq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.413612 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.439540 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts\") pod \"2b0f6f45-582e-4856-8bec-00097857b539\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.439776 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzc8\" (UniqueName: \"kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8\") pod \"2b0f6f45-582e-4856-8bec-00097857b539\" (UID: \"2b0f6f45-582e-4856-8bec-00097857b539\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.440431 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccsq8\" (UniqueName: \"kubernetes.io/projected/a74265c6-4eb6-45a1-aa83-b0656eed2247-kube-api-access-ccsq8\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.440443 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74265c6-4eb6-45a1-aa83-b0656eed2247-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.441796 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b0f6f45-582e-4856-8bec-00097857b539" (UID: "2b0f6f45-582e-4856-8bec-00097857b539"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.452201 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qwxh2" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.470805 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8" (OuterVolumeSpecName: "kube-api-access-6fzc8") pod "2b0f6f45-582e-4856-8bec-00097857b539" (UID: "2b0f6f45-582e-4856-8bec-00097857b539"). InnerVolumeSpecName "kube-api-access-6fzc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.544653 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvsw7\" (UniqueName: \"kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7\") pod \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.544841 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts\") pod \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\" (UID: \"f5f5a642-e071-4830-b31e-7f0e8e7b73ef\") " Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.545760 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b0f6f45-582e-4856-8bec-00097857b539-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.545784 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzc8\" (UniqueName: \"kubernetes.io/projected/2b0f6f45-582e-4856-8bec-00097857b539-kube-api-access-6fzc8\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.552868 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7" (OuterVolumeSpecName: "kube-api-access-dvsw7") pod "f5f5a642-e071-4830-b31e-7f0e8e7b73ef" (UID: "f5f5a642-e071-4830-b31e-7f0e8e7b73ef"). InnerVolumeSpecName "kube-api-access-dvsw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.554638 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5f5a642-e071-4830-b31e-7f0e8e7b73ef" (UID: "f5f5a642-e071-4830-b31e-7f0e8e7b73ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.647555 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvsw7\" (UniqueName: \"kubernetes.io/projected/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-kube-api-access-dvsw7\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.647600 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5f5a642-e071-4830-b31e-7f0e8e7b73ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.773273 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dzt8d-config-cqm2v"] Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.839660 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-de40-account-create-update-6c8ts" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.841453 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4vmcg" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.846740 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-de40-account-create-update-6c8ts" event={"ID":"f2ddced4-cb68-46ae-a929-a528b32c5ed5","Type":"ContainerDied","Data":"bb61b088d03c4ade245fc9a48d57ecacdcfcd42178323a1fb6a5c48e50056574"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.846778 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb61b088d03c4ade245fc9a48d57ecacdcfcd42178323a1fb6a5c48e50056574" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.846789 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vmcg" event={"ID":"84499c1d-3d44-4911-85b4-c1dafdb93b03","Type":"ContainerDied","Data":"2c9391ea6402584157426da937b3885f2db15c1b04a88f0384d0b0d67daa65fb"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.846799 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9391ea6402584157426da937b3885f2db15c1b04a88f0384d0b0d67daa65fb" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.853541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d5btv" event={"ID":"9cf99417-e04e-43cd-87d4-86b1bd8f33cc","Type":"ContainerDied","Data":"3c55e95e72addb9dd802e79074836b7764848db951499fbb02ba9dd0741c2f2a"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.853583 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c55e95e72addb9dd802e79074836b7764848db951499fbb02ba9dd0741c2f2a" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.853660 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d5btv" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.866953 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-48df-account-create-update-r4h6j" event={"ID":"a74265c6-4eb6-45a1-aa83-b0656eed2247","Type":"ContainerDied","Data":"f5e009dd9ec17ceafc748cff82599fea24821b76f5791f923abe1f974c921494"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.867009 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e009dd9ec17ceafc748cff82599fea24821b76f5791f923abe1f974c921494" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.867098 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-48df-account-create-update-r4h6j" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.869463 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qwxh2" event={"ID":"f5f5a642-e071-4830-b31e-7f0e8e7b73ef","Type":"ContainerDied","Data":"220823a1eef5996b247d3a01d972ed12b21c61255d5d455cf6b6c0c2fdd1d63a"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.869487 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220823a1eef5996b247d3a01d972ed12b21c61255d5d455cf6b6c0c2fdd1d63a" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.869524 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qwxh2" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.879169 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8a06-account-create-update-r2k25" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.879298 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8a06-account-create-update-r2k25" event={"ID":"7dab3e4c-800d-4848-9ff2-c5aed4c6a820","Type":"ContainerDied","Data":"dce8fce0a983a116e26796c48b548c971be60b5a5f8b9524ceaadf3db0dd0e89"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.879367 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dce8fce0a983a116e26796c48b548c971be60b5a5f8b9524ceaadf3db0dd0e89" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.884764 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-65b7-account-create-update-g5twz" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.884870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-65b7-account-create-update-g5twz" event={"ID":"2b0f6f45-582e-4856-8bec-00097857b539","Type":"ContainerDied","Data":"01192659a69e9bfa3f28a295c46562f957aee248047cb9ef5de042f460639535"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.884919 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01192659a69e9bfa3f28a295c46562f957aee248047cb9ef5de042f460639535" Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.886674 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dzt8d-config-cqm2v" event={"ID":"fdbba8eb-280b-495a-9dee-bd6cafb74598","Type":"ContainerStarted","Data":"05c0c7111a5d6e934f0209fcbce14392fd52c7f8a7af03a4fb92692f699aee7e"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.888969 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"763de9cf-5d74-4977-b6d5-53430185b17b","Type":"ContainerStarted","Data":"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a"} Nov 28 21:10:04 crc kubenswrapper[4957]: I1128 21:10:04.920638 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.398955992 podStartE2EDuration="4.920619134s" podCreationTimestamp="2025-11-28 21:10:00 +0000 UTC" firstStartedPulling="2025-11-28 21:10:01.610947304 +0000 UTC m=+1241.079595213" lastFinishedPulling="2025-11-28 21:10:04.132610446 +0000 UTC m=+1243.601258355" observedRunningTime="2025-11-28 21:10:04.914581285 +0000 UTC m=+1244.383229194" watchObservedRunningTime="2025-11-28 21:10:04.920619134 +0000 UTC m=+1244.389267043" Nov 28 21:10:05 crc kubenswrapper[4957]: I1128 21:10:05.770723 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:05 crc kubenswrapper[4957]: I1128 21:10:05.774045 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:05 crc kubenswrapper[4957]: I1128 21:10:05.900189 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 21:10:07.403139 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 
21:10:07.412982 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5\") " pod="openstack/swift-storage-0" Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 21:10:07.710837 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 21:10:07.928661 4957 generic.go:334] "Generic (PLEG): container finished" podID="fdbba8eb-280b-495a-9dee-bd6cafb74598" containerID="9e2588fac43e45deff7a6705c72ef3d1d36222c2bd9883a02b68befb0275c10e" exitCode=0 Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 21:10:07.928930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dzt8d-config-cqm2v" event={"ID":"fdbba8eb-280b-495a-9dee-bd6cafb74598","Type":"ContainerDied","Data":"9e2588fac43e45deff7a6705c72ef3d1d36222c2bd9883a02b68befb0275c10e"} Nov 28 21:10:07 crc kubenswrapper[4957]: I1128 21:10:07.938102 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dzt8d" Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.919333 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.919991 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="thanos-sidecar" containerID="cri-o://9d268751f5e2acd92321ba75b64bbeb8f207348ba78549b9aeffdafcc28e4c8c" gracePeriod=600 Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.920103 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="config-reloader" containerID="cri-o://a980127708b819d7861bca333b7a086aa7d4d44fbe558acb2dc20379bb5ffe60" gracePeriod=600 Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.920156 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" containerID="cri-o://9009ab1d771da34161caf020be8928d10b0cbd46cb404d4e2fd1a19b0dcf018a" gracePeriod=600 Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.992472 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:10:08 crc kubenswrapper[4957]: I1128 21:10:08.992521 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955280 4957 generic.go:334] "Generic (PLEG): container finished" podID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerID="9d268751f5e2acd92321ba75b64bbeb8f207348ba78549b9aeffdafcc28e4c8c" exitCode=0 Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955309 4957 generic.go:334] 
"Generic (PLEG): container finished" podID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerID="a980127708b819d7861bca333b7a086aa7d4d44fbe558acb2dc20379bb5ffe60" exitCode=0 Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955316 4957 generic.go:334] "Generic (PLEG): container finished" podID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerID="9009ab1d771da34161caf020be8928d10b0cbd46cb404d4e2fd1a19b0dcf018a" exitCode=0 Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955338 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerDied","Data":"9d268751f5e2acd92321ba75b64bbeb8f207348ba78549b9aeffdafcc28e4c8c"} Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955364 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerDied","Data":"a980127708b819d7861bca333b7a086aa7d4d44fbe558acb2dc20379bb5ffe60"} Nov 28 21:10:09 crc kubenswrapper[4957]: I1128 21:10:09.955374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerDied","Data":"9009ab1d771da34161caf020be8928d10b0cbd46cb404d4e2fd1a19b0dcf018a"} Nov 28 21:10:10 crc kubenswrapper[4957]: I1128 21:10:10.771306 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.135:9090/-/ready\": dial tcp 10.217.0.135:9090: connect: connection refused" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.761105 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.789696 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.789887 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.789926 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.789961 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlf5g\" (UniqueName: \"kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.790022 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.790040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn\") pod \"fdbba8eb-280b-495a-9dee-bd6cafb74598\" (UID: \"fdbba8eb-280b-495a-9dee-bd6cafb74598\") " Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.790057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run" (OuterVolumeSpecName: "var-run") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.790481 4957 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.790515 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.791139 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.791353 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts" (OuterVolumeSpecName: "scripts") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.791662 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.809994 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g" (OuterVolumeSpecName: "kube-api-access-xlf5g") pod "fdbba8eb-280b-495a-9dee-bd6cafb74598" (UID: "fdbba8eb-280b-495a-9dee-bd6cafb74598"). InnerVolumeSpecName "kube-api-access-xlf5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.891476 4957 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.891517 4957 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.891536 4957 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdbba8eb-280b-495a-9dee-bd6cafb74598-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.891551 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlf5g\" (UniqueName: \"kubernetes.io/projected/fdbba8eb-280b-495a-9dee-bd6cafb74598-kube-api-access-xlf5g\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:14 crc kubenswrapper[4957]: I1128 21:10:14.891565 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdbba8eb-280b-495a-9dee-bd6cafb74598-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.028111 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dzt8d-config-cqm2v" event={"ID":"fdbba8eb-280b-495a-9dee-bd6cafb74598","Type":"ContainerDied","Data":"05c0c7111a5d6e934f0209fcbce14392fd52c7f8a7af03a4fb92692f699aee7e"} Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.028151 4957 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05c0c7111a5d6e934f0209fcbce14392fd52c7f8a7af03a4fb92692f699aee7e" Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.028200 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dzt8d-config-cqm2v" Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.773881 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.135:9090/-/ready\": dial tcp 10.217.0.135:9090: connect: connection refused" Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.893246 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dzt8d-config-cqm2v"] Nov 28 21:10:15 crc kubenswrapper[4957]: I1128 21:10:15.903689 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dzt8d-config-cqm2v"] Nov 28 21:10:16 crc kubenswrapper[4957]: I1128 21:10:16.845644 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdbba8eb-280b-495a-9dee-bd6cafb74598" path="/var/lib/kubelet/pods/fdbba8eb-280b-495a-9dee-bd6cafb74598/volumes" Nov 28 21:10:17 crc kubenswrapper[4957]: E1128 21:10:17.342095 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 28 21:10:17 crc kubenswrapper[4957]: E1128 21:10:17.342287 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mclzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-rb757_openstack(8196ca60-081d-4a36-acaf-d7e019bf2b12): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:10:17 crc kubenswrapper[4957]: E1128 21:10:17.344227 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-rb757" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.728830 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.759711 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.759880 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.759966 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.760079 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tss9v\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.760160 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.760239 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.760261 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.760343 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file\") pod \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\" (UID: \"0cf3b067-6d8a-4d74-8c8c-285536f779e9\") " Nov 28 21:10:17 crc 
kubenswrapper[4957]: I1128 21:10:17.769700 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out" (OuterVolumeSpecName: "config-out") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.774575 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.776424 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config" (OuterVolumeSpecName: "config") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.779042 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v" (OuterVolumeSpecName: "kube-api-access-tss9v") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "kube-api-access-tss9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.780761 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.782172 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.784391 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.813534 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config" (OuterVolumeSpecName: "web-config") pod "0cf3b067-6d8a-4d74-8c8c-285536f779e9" (UID: "0cf3b067-6d8a-4d74-8c8c-285536f779e9"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864413 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864886 4957 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0cf3b067-6d8a-4d74-8c8c-285536f779e9-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864914 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tss9v\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-kube-api-access-tss9v\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864929 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864939 4957 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0cf3b067-6d8a-4d74-8c8c-285536f779e9-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864950 4957 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-web-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864961 4957 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0cf3b067-6d8a-4d74-8c8c-285536f779e9-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.864971 4957 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0cf3b067-6d8a-4d74-8c8c-285536f779e9-config-out\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.895625 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 28 21:10:17 crc kubenswrapper[4957]: I1128 21:10:17.966937 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.038320 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 21:10:18 crc kubenswrapper[4957]: W1128 21:10:18.040552 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb21c56_7bea_44f0_8dc6_b50a18a7cbd5.slice/crio-9cb56407496db4df6f253df4a3ee4730448c76930df53f9c0f0133fc5d2a4a05 WatchSource:0}: Error finding container 9cb56407496db4df6f253df4a3ee4730448c76930df53f9c0f0133fc5d2a4a05: Status 404 returned error can't find the container with id 9cb56407496db4df6f253df4a3ee4730448c76930df53f9c0f0133fc5d2a4a05 Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.077501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"0cf3b067-6d8a-4d74-8c8c-285536f779e9","Type":"ContainerDied","Data":"0551abe995c9d76e9fc5b49d6fbfa874d004b6d19af95d73f8d61ebc0b4faf8d"} Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.077547 4957 scope.go:117] "RemoveContainer" containerID="9d268751f5e2acd92321ba75b64bbeb8f207348ba78549b9aeffdafcc28e4c8c" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.077661 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.081343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"9cb56407496db4df6f253df4a3ee4730448c76930df53f9c0f0133fc5d2a4a05"} Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.082977 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-rb757" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.101417 4957 scope.go:117] "RemoveContainer" containerID="a980127708b819d7861bca333b7a086aa7d4d44fbe558acb2dc20379bb5ffe60" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.129701 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.135736 4957 scope.go:117] "RemoveContainer" containerID="9009ab1d771da34161caf020be8928d10b0cbd46cb404d4e2fd1a19b0dcf018a" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.149917 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.159961 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160420 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160436 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160448 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b34c44-6e04-4e3f-8147-9616e4003021" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160455 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b34c44-6e04-4e3f-8147-9616e4003021" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160465 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="config-reloader" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160471 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="config-reloader" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160491 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ddced4-cb68-46ae-a929-a528b32c5ed5" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc 
kubenswrapper[4957]: I1128 21:10:18.160508 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ddced4-cb68-46ae-a929-a528b32c5ed5" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160520 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf99417-e04e-43cd-87d4-86b1bd8f33cc" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160526 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf99417-e04e-43cd-87d4-86b1bd8f33cc" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160537 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0f6f45-582e-4856-8bec-00097857b539" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160543 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0f6f45-582e-4856-8bec-00097857b539" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160561 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74265c6-4eb6-45a1-aa83-b0656eed2247" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160567 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74265c6-4eb6-45a1-aa83-b0656eed2247" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160577 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dab3e4c-800d-4848-9ff2-c5aed4c6a820" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160583 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dab3e4c-800d-4848-9ff2-c5aed4c6a820" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160596 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f5a642-e071-4830-b31e-7f0e8e7b73ef" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160602 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f5a642-e071-4830-b31e-7f0e8e7b73ef" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160612 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84499c1d-3d44-4911-85b4-c1dafdb93b03" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160618 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="84499c1d-3d44-4911-85b4-c1dafdb93b03" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160628 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbba8eb-280b-495a-9dee-bd6cafb74598" containerName="ovn-config" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160635 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbba8eb-280b-495a-9dee-bd6cafb74598" containerName="ovn-config" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160645 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="thanos-sidecar" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160651 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="thanos-sidecar" Nov 28 21:10:18 crc kubenswrapper[4957]: E1128 21:10:18.160668 4957 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="init-config-reloader" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160675 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="init-config-reloader" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160855 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbba8eb-280b-495a-9dee-bd6cafb74598" containerName="ovn-config" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160866 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74265c6-4eb6-45a1-aa83-b0656eed2247" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160879 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf99417-e04e-43cd-87d4-86b1bd8f33cc" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160891 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ddced4-cb68-46ae-a929-a528b32c5ed5" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160897 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dab3e4c-800d-4848-9ff2-c5aed4c6a820" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160910 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0f6f45-582e-4856-8bec-00097857b539" containerName="mariadb-account-create-update" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160919 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="config-reloader" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160930 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="84499c1d-3d44-4911-85b4-c1dafdb93b03" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160943 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="thanos-sidecar" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160950 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b34c44-6e04-4e3f-8147-9616e4003021" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160959 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" containerName="prometheus" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.160969 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f5a642-e071-4830-b31e-7f0e8e7b73ef" containerName="mariadb-database-create" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.162825 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.164671 4957 scope.go:117] "RemoveContainer" containerID="ccab034b7fe1b003bb22b8809867467c6fa2bdf31a6b078700099887371b30d7" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166040 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vg8pc" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166557 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166675 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166750 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166759 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.166963 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.169596 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.171890 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273360 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273428 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc4dd4fb-4706-4212-bfc5-84029b567248-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273457 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273541 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc4dd4fb-4706-4212-bfc5-84029b567248-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273651 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273766 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.273791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnzk\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-kube-api-access-pgnzk\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.274012 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.274231 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.274319 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.375890 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.375967 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc4dd4fb-4706-4212-bfc5-84029b567248-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " 
pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.375988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc4dd4fb-4706-4212-bfc5-84029b567248-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376126 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnzk\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-kube-api-access-pgnzk\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376176 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.376189 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.377276 4957 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.377348 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.377647 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc4dd4fb-4706-4212-bfc5-84029b567248-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.386177 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.386337 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.386341 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.386970 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.387021 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.387223 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc4dd4fb-4706-4212-bfc5-84029b567248-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.387626 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.390644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fc4dd4fb-4706-4212-bfc5-84029b567248-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.392658 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnzk\" (UniqueName: \"kubernetes.io/projected/fc4dd4fb-4706-4212-bfc5-84029b567248-kube-api-access-pgnzk\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.414852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"prometheus-metric-storage-0\" (UID: \"fc4dd4fb-4706-4212-bfc5-84029b567248\") " pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.498229 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.839579 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf3b067-6d8a-4d74-8c8c-285536f779e9" path="/var/lib/kubelet/pods/0cf3b067-6d8a-4d74-8c8c-285536f779e9/volumes" Nov 28 21:10:18 crc kubenswrapper[4957]: I1128 21:10:18.992205 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 21:10:19 crc kubenswrapper[4957]: I1128 21:10:19.119983 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerStarted","Data":"e91a486f8adff032e47bb195c543015a9b183add82746b1d96d8afde62f5ba4b"} Nov 28 21:10:19 crc kubenswrapper[4957]: I1128 21:10:19.121609 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6jkj" event={"ID":"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f","Type":"ContainerStarted","Data":"dc130955cb6ca5d8b6138209e058c659eb5184f2dffb48e7a89e2c5adc29cad6"} Nov 28 21:10:19 crc kubenswrapper[4957]: I1128 21:10:19.157187 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-l6jkj" podStartSLOduration=3.677200433 podStartE2EDuration="27.157165482s" podCreationTimestamp="2025-11-28 21:09:52 +0000 UTC" firstStartedPulling="2025-11-28 21:09:53.959569756 +0000 UTC m=+1233.428217665" lastFinishedPulling="2025-11-28 21:10:17.439534815 +0000 UTC m=+1256.908182714" observedRunningTime="2025-11-28 21:10:19.14245898 +0000 UTC m=+1258.611106889" watchObservedRunningTime="2025-11-28 21:10:19.157165482 +0000 UTC m=+1258.625813391" Nov 28 21:10:22 crc kubenswrapper[4957]: I1128 21:10:22.153773 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"fda2f0cb996fdb8d15c9a92b3e63f76c8ebd4bb47efdc43e0bdeb8859dad683e"} Nov 28 21:10:22 crc kubenswrapper[4957]: I1128 21:10:22.154309 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"8bd8752c3d5c25f2e469d8d1729658037cb1fb2c6b495b035987a202f0ec5bc9"} Nov 28 21:10:22 crc kubenswrapper[4957]: I1128 21:10:22.156833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerStarted","Data":"ef397000ab5665c9c5efa330b1b903694e208c8912a9331ca15cb109d64f8fa9"} Nov 28 21:10:23 crc kubenswrapper[4957]: I1128 21:10:23.180089 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"f70206364fc3db4ae14ac6aba243c59b4c30a216634c945112780e15926c56c5"} Nov 28 21:10:23 crc kubenswrapper[4957]: I1128 21:10:23.180540 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"f44cfd8f0ad6b5d5c568514a1a63314a442fd4ef1b978bb4959a40053d991136"} Nov 28 21:10:24 crc kubenswrapper[4957]: I1128 21:10:24.193601 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"609624ed6420b081db96dad5a1b56d08b6cc894559e92de0df9e7d538e280dfb"} Nov 28 21:10:25 crc kubenswrapper[4957]: I1128 21:10:25.207987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"bdb044ff4ea11b85effba014c591abe38235033df00e0919090baf47edf37246"} Nov 28 21:10:25 crc kubenswrapper[4957]: I1128 21:10:25.208817 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"0f0d9acc0c4d17b615e4ae30d430f81b2b41e644abeaa34ef159b5b0f07ebe27"} Nov 28 21:10:25 crc kubenswrapper[4957]: I1128 21:10:25.208835 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"9d1e6eadc7ff4383e5937881c78f0cc9ecf92ca0e611b23ab4f655062bed6656"} Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.234582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"9150e8ad07973277035a8f58ec9c36c6c22b7781eba90d94fb2403d53517af57"} Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.235094 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"a8cad1c8393b186b3197d560916b1f2d1831184aceffcf7d15ba0d8ee0cd077e"} Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.235111 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"5cdadfbf682467b4e0e6d211fc79a5c84f98eeda7a3738e654db4a7614d99538"} Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.239862 4957 
generic.go:334] "Generic (PLEG): container finished" podID="fc4dd4fb-4706-4212-bfc5-84029b567248" containerID="ef397000ab5665c9c5efa330b1b903694e208c8912a9331ca15cb109d64f8fa9" exitCode=0 Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.239921 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerDied","Data":"ef397000ab5665c9c5efa330b1b903694e208c8912a9331ca15cb109d64f8fa9"} Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.242421 4957 generic.go:334] "Generic (PLEG): container finished" podID="a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" containerID="dc130955cb6ca5d8b6138209e058c659eb5184f2dffb48e7a89e2c5adc29cad6" exitCode=0 Nov 28 21:10:27 crc kubenswrapper[4957]: I1128 21:10:27.242458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6jkj" event={"ID":"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f","Type":"ContainerDied","Data":"dc130955cb6ca5d8b6138209e058c659eb5184f2dffb48e7a89e2c5adc29cad6"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.258093 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"9cb43487e050953f66006fca190519c362f01bfc8dac6b786dec282d2d45cc21"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.259294 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"e20543202ef96d41f62ac9ba5d53df519e27a86ce14e75b0dd8fd373152bf768"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.259360 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"e34269cffb65de6adab4cb330ecc2022f9967baf428c6be5084c3e33c7aee785"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.259454 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5","Type":"ContainerStarted","Data":"a34a7fce677350f1faa76bbc37fc090493640e8d50291d0a2541e770e37e8fa5"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.260605 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerStarted","Data":"dbd6be6f6c5e6b50fa05809a18158daf731b2364583f6d6566635432bb4c568a"} Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.307654 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.962937069 podStartE2EDuration="54.307634125s" podCreationTimestamp="2025-11-28 21:09:34 +0000 UTC" firstStartedPulling="2025-11-28 21:10:18.043894279 +0000 UTC m=+1257.512542188" lastFinishedPulling="2025-11-28 21:10:26.388591345 +0000 UTC m=+1265.857239244" observedRunningTime="2025-11-28 21:10:28.299527826 +0000 UTC m=+1267.768175755" watchObservedRunningTime="2025-11-28 21:10:28.307634125 +0000 UTC m=+1267.776282034" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.582362 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"] Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.584232 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.588298 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.618114 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"] Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690659 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690740 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690861 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690915 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690960 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.690999 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdc4w\" (UniqueName: \"kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.763153 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l6jkj" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794498 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794593 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794626 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794679 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdc4w\" (UniqueName: \"kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794747 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.794789 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.796393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.796418 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.796523 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 
21:10:28.796648 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.797194 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.821008 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdc4w\" (UniqueName: \"kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w\") pod \"dnsmasq-dns-764c5664d7-l64js\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.896634 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data\") pod \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.896679 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle\") pod \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.896698 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gbnn\" (UniqueName: \"kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn\") pod \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.897870 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data\") pod \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\" (UID: \"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f\") " Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.901402 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn" (OuterVolumeSpecName: "kube-api-access-5gbnn") pod "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" (UID: "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f"). InnerVolumeSpecName "kube-api-access-5gbnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.902551 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" (UID: "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.920888 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" (UID: "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.931084 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:28 crc kubenswrapper[4957]: I1128 21:10:28.948195 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data" (OuterVolumeSpecName: "config-data") pod "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" (UID: "a8aef833-7bf5-4ae4-9fc9-62e1bf24871f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.001439 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.001491 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.001504 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.001516 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gbnn\" (UniqueName: \"kubernetes.io/projected/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f-kube-api-access-5gbnn\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.271568 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-l6jkj" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.271676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-l6jkj" event={"ID":"a8aef833-7bf5-4ae4-9fc9-62e1bf24871f","Type":"ContainerDied","Data":"d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c"} Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.272125 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6636d722fbc31d73fdc61a8421586aa08cfce7bfb23ad3e618ab3b2aecb254c" Nov 28 21:10:29 crc kubenswrapper[4957]: W1128 21:10:29.387135 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde6bc06c_3cb0_4b23_9949_b3d770b55a48.slice/crio-58f7f523753d394f5e141678e6092ef917acadce7d53ab1100931c8d2bcd7960 WatchSource:0}: Error finding container 58f7f523753d394f5e141678e6092ef917acadce7d53ab1100931c8d2bcd7960: Status 404 returned error can't find the container with id 58f7f523753d394f5e141678e6092ef917acadce7d53ab1100931c8d2bcd7960 Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.387735 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"] Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.670304 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"] Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.722437 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"] Nov 28 21:10:29 crc kubenswrapper[4957]: E1128 21:10:29.722924 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" containerName="glance-db-sync" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.722942 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" containerName="glance-db-sync" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.723180 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" containerName="glance-db-sync" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.724312 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.732186 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"] Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827398 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827490 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827586 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827612 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827672 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9bx\" (UniqueName: \"kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.827740 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929520 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929620 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929647 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.929710 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9bx\" (UniqueName: \"kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.930636 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.930644 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.931186 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.931295 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.931419 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:29 crc kubenswrapper[4957]: I1128 21:10:29.952781 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9bx\" (UniqueName: 
\"kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx\") pod \"dnsmasq-dns-74f6bcbc87-8khxm\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:30 crc kubenswrapper[4957]: I1128 21:10:30.239641 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:30 crc kubenswrapper[4957]: I1128 21:10:30.288722 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l64js" event={"ID":"de6bc06c-3cb0-4b23-9949-b3d770b55a48","Type":"ContainerStarted","Data":"58f7f523753d394f5e141678e6092ef917acadce7d53ab1100931c8d2bcd7960"} Nov 28 21:10:30 crc kubenswrapper[4957]: I1128 21:10:30.715311 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"] Nov 28 21:10:30 crc kubenswrapper[4957]: I1128 21:10:30.824975 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.298833 4957 generic.go:334] "Generic (PLEG): container finished" podID="de6bc06c-3cb0-4b23-9949-b3d770b55a48" containerID="5cb75d0b4dbc183bb62f240b857f549d9ea223651aa5b85fa66a55c8bb25b8b6" exitCode=0 Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.299034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l64js" event={"ID":"de6bc06c-3cb0-4b23-9949-b3d770b55a48","Type":"ContainerDied","Data":"5cb75d0b4dbc183bb62f240b857f549d9ea223651aa5b85fa66a55c8bb25b8b6"} Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.301974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerStarted","Data":"903744c3fc27d8a918382348c0868657e3aadedbd57c7004d3507aa0983138e9"} Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.303721 4957 generic.go:334] "Generic (PLEG): container finished" podID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerID="51bd9b495b29bbbf815063ce470d15d7a13ac328ed2c0c266ae4c3bf97eb1abb" exitCode=0 Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.303759 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" event={"ID":"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9","Type":"ContainerDied","Data":"51bd9b495b29bbbf815063ce470d15d7a13ac328ed2c0c266ae4c3bf97eb1abb"} Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.303783 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" event={"ID":"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9","Type":"ContainerStarted","Data":"24588ce76901811cc0c88d32ff7c6a9135e7b888b1f44a0751d70899a681dc4f"} Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.646726 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l64js" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.794865 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.794994 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdc4w\" (UniqueName: \"kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.795096 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.795154 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.795487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.795533 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc\") pod \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\" (UID: \"de6bc06c-3cb0-4b23-9949-b3d770b55a48\") " Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.801575 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w" (OuterVolumeSpecName: "kube-api-access-cdc4w") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "kube-api-access-cdc4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.823617 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.843906 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config" (OuterVolumeSpecName: "config") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.845485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.867998 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.870338 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de6bc06c-3cb0-4b23-9949-b3d770b55a48" (UID: "de6bc06c-3cb0-4b23-9949-b3d770b55a48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903013 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903050 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903064 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903078 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdc4w\" (UniqueName: \"kubernetes.io/projected/de6bc06c-3cb0-4b23-9949-b3d770b55a48-kube-api-access-cdc4w\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903090 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:31 crc kubenswrapper[4957]: I1128 21:10:31.903109 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de6bc06c-3cb0-4b23-9949-b3d770b55a48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.317374 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-l64js" event={"ID":"de6bc06c-3cb0-4b23-9949-b3d770b55a48","Type":"ContainerDied","Data":"58f7f523753d394f5e141678e6092ef917acadce7d53ab1100931c8d2bcd7960"} Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.317457 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-l64js"
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.317614 4957 scope.go:117] "RemoveContainer" containerID="5cb75d0b4dbc183bb62f240b857f549d9ea223651aa5b85fa66a55c8bb25b8b6"
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.321377 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb757" event={"ID":"8196ca60-081d-4a36-acaf-d7e019bf2b12","Type":"ContainerStarted","Data":"6dc4f482b8a4e35bee03479d90c9119635eec0060008b7291021d36f1805f46b"}
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.330276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fc4dd4fb-4706-4212-bfc5-84029b567248","Type":"ContainerStarted","Data":"451bf916bf616a4f45f02a2268261055fd43f3dc6c96211d319a05acd7fdd6ab"}
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.348417 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rb757" podStartSLOduration=2.413881988 podStartE2EDuration="33.348399791s" podCreationTimestamp="2025-11-28 21:09:59 +0000 UTC" firstStartedPulling="2025-11-28 21:10:01.03819046 +0000 UTC m=+1240.506838359" lastFinishedPulling="2025-11-28 21:10:31.972708253 +0000 UTC m=+1271.441356162" observedRunningTime="2025-11-28 21:10:32.338695432 +0000 UTC m=+1271.807343331" watchObservedRunningTime="2025-11-28 21:10:32.348399791 +0000 UTC m=+1271.817047690"
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.395356 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"]
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.402556 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-l64js"]
Nov 28 21:10:32 crc kubenswrapper[4957]: I1128 21:10:32.412683 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.412662775 podStartE2EDuration="14.412662775s" podCreationTimestamp="2025-11-28 21:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:32.408825661 +0000 UTC m=+1271.877473570" watchObservedRunningTime="2025-11-28 21:10:32.412662775 +0000 UTC m=+1271.881310694"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:32.841390 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6bc06c-3cb0-4b23-9949-b3d770b55a48" path="/var/lib/kubelet/pods/de6bc06c-3cb0-4b23-9949-b3d770b55a48/volumes"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.340564 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" event={"ID":"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9","Type":"ContainerStarted","Data":"e22e24c80c15a0e5f16d06f39d369ddb7c1afad34d7ca45c6a5be625f907646e"}
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.342112 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.363889 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" podStartSLOduration=4.363869985 podStartE2EDuration="4.363869985s" podCreationTimestamp="2025-11-28 21:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:33.360888931 +0000 UTC m=+1272.829536840" watchObservedRunningTime="2025-11-28 21:10:33.363869985 +0000 UTC m=+1272.832517894"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.500119 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.500224 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 28 21:10:33 crc kubenswrapper[4957]: I1128 21:10:33.504966 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 28 21:10:34 crc kubenswrapper[4957]: I1128 21:10:34.357057 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 28 21:10:36 crc kubenswrapper[4957]: I1128 21:10:36.372390 4957 generic.go:334] "Generic (PLEG): container finished" podID="8196ca60-081d-4a36-acaf-d7e019bf2b12" containerID="6dc4f482b8a4e35bee03479d90c9119635eec0060008b7291021d36f1805f46b" exitCode=0
Nov 28 21:10:36 crc kubenswrapper[4957]: I1128 21:10:36.372480 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb757" event={"ID":"8196ca60-081d-4a36-acaf-d7e019bf2b12","Type":"ContainerDied","Data":"6dc4f482b8a4e35bee03479d90c9119635eec0060008b7291021d36f1805f46b"}
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.796028 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rb757"
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.929827 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data\") pod \"8196ca60-081d-4a36-acaf-d7e019bf2b12\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") "
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.930461 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle\") pod \"8196ca60-081d-4a36-acaf-d7e019bf2b12\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") "
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.930570 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mclzt\" (UniqueName: \"kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt\") pod \"8196ca60-081d-4a36-acaf-d7e019bf2b12\" (UID: \"8196ca60-081d-4a36-acaf-d7e019bf2b12\") "
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.936985 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt" (OuterVolumeSpecName: "kube-api-access-mclzt") pod "8196ca60-081d-4a36-acaf-d7e019bf2b12" (UID: "8196ca60-081d-4a36-acaf-d7e019bf2b12"). InnerVolumeSpecName "kube-api-access-mclzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.965384 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8196ca60-081d-4a36-acaf-d7e019bf2b12" (UID: "8196ca60-081d-4a36-acaf-d7e019bf2b12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:10:37 crc kubenswrapper[4957]: I1128 21:10:37.987168 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data" (OuterVolumeSpecName: "config-data") pod "8196ca60-081d-4a36-acaf-d7e019bf2b12" (UID: "8196ca60-081d-4a36-acaf-d7e019bf2b12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.034307 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.034338 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mclzt\" (UniqueName: \"kubernetes.io/projected/8196ca60-081d-4a36-acaf-d7e019bf2b12-kube-api-access-mclzt\") on node \"crc\" DevicePath \"\""
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.034349 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8196ca60-081d-4a36-acaf-d7e019bf2b12-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.392895 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb757" event={"ID":"8196ca60-081d-4a36-acaf-d7e019bf2b12","Type":"ContainerDied","Data":"c0a6e5f3096ae545bb4dc403c92ee12c84245436106e855c39973d9508171309"}
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.392940 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0a6e5f3096ae545bb4dc403c92ee12c84245436106e855c39973d9508171309"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.393198 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rb757"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.696007 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.696311 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="dnsmasq-dns" containerID="cri-o://e22e24c80c15a0e5f16d06f39d369ddb7c1afad34d7ca45c6a5be625f907646e" gracePeriod=10
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.712004 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.739281 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cvtnr"]
Nov 28 21:10:38 crc kubenswrapper[4957]: E1128 21:10:38.740274 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" containerName="keystone-db-sync"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.740302 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" containerName="keystone-db-sync"
Nov 28 21:10:38 crc kubenswrapper[4957]: E1128 21:10:38.740341 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6bc06c-3cb0-4b23-9949-b3d770b55a48" containerName="init"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.740351 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6bc06c-3cb0-4b23-9949-b3d770b55a48" containerName="init"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.740880 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" containerName="keystone-db-sync"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.740950 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6bc06c-3cb0-4b23-9949-b3d770b55a48" containerName="init"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.742164 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.759841 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.760346 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.760587 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.761271 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.761485 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zhjp"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.799025 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cvtnr"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.862881 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.862939 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.863021 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.863051 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qxh\" (UniqueName: \"kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.863089 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.863136 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.871455 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.895271 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.895384 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.914545 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7skhk"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.922034 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.929523 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.938692 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kbz6c"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.940896 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7skhk"]
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966053 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966127 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qxh\" (UniqueName: \"kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966235 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966386 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.966432 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.987856 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.988052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.994856 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.995432 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.995476 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:10:38 crc kubenswrapper[4957]: I1128 21:10:38.995514 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:38.997552 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:38.997634 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4" gracePeriod=600
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.015849 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.044894 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qxh\" (UniqueName: \"kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.052085 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys\") pod \"keystone-bootstrap-cvtnr\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081009 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081227 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081439 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5h6k\" (UniqueName: \"kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081798 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc8m\" (UniqueName: \"kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.081975 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.099667 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvtnr"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.155907 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tzbj8"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.157734 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.188839 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s4ndb"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.189106 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190250 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190287 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc8m\" (UniqueName: \"kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190360 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190392 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190406 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190426 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5h6k\" (UniqueName: \"kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.190535 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.191787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.192338 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.192860 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.193225 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.193425 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.193550 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.213136 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.213953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.224768 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tzbj8"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.291154 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc8m\" (UniqueName: \"kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m\") pod \"heat-db-sync-7skhk\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") " pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.300703 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.300879 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.300941 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.334593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5h6k\" (UniqueName: \"kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k\") pod \"dnsmasq-dns-847c4cc679-t8ts5\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.370402 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lm4b6"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.371858 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.383565 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m5nq9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.383828 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.384060 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.402463 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.402598 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.402642 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.406949 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.407701 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.411757 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.416323 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.445361 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lm4b6"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.465494 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wnxk5"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.466866 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.475471 4957 generic.go:334] "Generic (PLEG): container finished" podID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerID="e22e24c80c15a0e5f16d06f39d369ddb7c1afad34d7ca45c6a5be625f907646e" exitCode=0
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.476039 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" event={"ID":"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9","Type":"ContainerDied","Data":"e22e24c80c15a0e5f16d06f39d369ddb7c1afad34d7ca45c6a5be625f907646e"}
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.476739 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.476816 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txcbx"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.490459 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7skhk"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.495409 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4" exitCode=0
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.495459 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4"}
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.495495 4957 scope.go:117] "RemoveContainer" containerID="54c98ea802e0128e09dc3a1d110ba16d7362c23b697cdc49ba44b4359ff9c798"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.509631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w68g\" (UniqueName: \"kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.509783 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.509851 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.509928 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.509999 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.510128 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.518561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnxk5"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.546225 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v9dz9"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.549202 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv\") pod \"neutron-db-sync-tzbj8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.559819 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.567397 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.567636 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.567761 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xx8qj"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.605785 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9dz9"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612342 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612429 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612458 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612663 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqsdb\" (UniqueName: \"kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612690 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w68g\" (UniqueName: \"kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612753 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612797 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612822 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.612976 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.619564 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.621495 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.630886 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.633943 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.638420 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.640404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w68g\" (UniqueName: \"kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.651924 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.654436 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data\") pod \"cinder-db-sync-lm4b6\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.671659 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tzbj8"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.678136 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.682543 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.689090 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.694250 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.698088 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715508 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqsdb\" (UniqueName: \"kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715637 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715661 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715733 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715872 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715917 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715952 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.715974 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.716000 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.716020 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9799j\" (UniqueName: \"kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.716045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.717488 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lm4b6"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.742987 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.744910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqsdb\" (UniqueName: \"kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.747028 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data\") pod \"barbican-db-sync-wnxk5\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") " pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.833934 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.833984 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834044 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834151 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834311 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834452 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834581 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834687 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834711 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglkd\" (UniqueName: \"kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834750 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834798 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834880 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9799j\" (UniqueName: \"kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.834923 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.835064 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.839064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.842110 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.842272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.848614 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.863371 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.864539 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.870806 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.872360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.874600 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.889947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.894660 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs\") pod \"placement-db-sync-v9dz9\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.905692 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9799j\" (UniqueName: \"kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j\") pod \"dnsmasq-dns-785d8bcb8c-ff8lc\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.911333 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9dz9"
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.928495 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.930782 4957 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.933690 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.933846 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.936798 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-26vwp" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.936987 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940240 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940292 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940426 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940482 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglkd\" (UniqueName: \"kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.940525 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.944826 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " 
pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.946324 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.946546 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.947951 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.950415 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.951802 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.968948 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.975450 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:39 crc kubenswrapper[4957]: I1128 21:10:39.989571 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglkd\" (UniqueName: \"kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd\") pod \"ceilometer-0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " pod="openstack/ceilometer-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.042691 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.042863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5cpm\" (UniqueName: \"kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.042916 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.042965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.043018 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.043100 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.043146 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.043177 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.054909 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.143566 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.145878 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.151033 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.151415 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.151979 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5cpm\" (UniqueName: \"kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.152084 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.153636 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.153736 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.156349 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.156905 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.165719 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.165880 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.165920 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.165940 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.166554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.171673 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.174104 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.180527 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.180969 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.181599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.201413 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5cpm\" (UniqueName: \"kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.212604 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cvtnr"] Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.227942 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.271801 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.273838 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.277877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.278133 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2pq\" (UniqueName: \"kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.278452 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.278965 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.279152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.279415 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: W1128 21:10:40.290077 4957 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd8fcc2_cbe5_4677_95b2_009ed030745d.slice/crio-f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f WatchSource:0}: Error finding container f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f: Status 404 returned error can't find the container with id f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.315103 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.389986 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390359 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t9bx\" (UniqueName: \"kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390474 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390598 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390730 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390770 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb\") pod \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\" (UID: \"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9\") " Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.390998 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2pq\" (UniqueName: \"kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391022 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391117 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391154 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391180 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391263 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391294 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.391333 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.393112 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.395449 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.396684 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " 
pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.449396 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.456617 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.475746 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.489173 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"] Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.491866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx" (OuterVolumeSpecName: "kube-api-access-6t9bx") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "kube-api-access-6t9bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.494866 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t9bx\" (UniqueName: \"kubernetes.io/projected/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-kube-api-access-6t9bx\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.494993 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.497952 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.498019 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2pq\" (UniqueName: \"kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq\") pod \"glance-default-external-api-0\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.503129 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.519352 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" event={"ID":"83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9","Type":"ContainerDied","Data":"24588ce76901811cc0c88d32ff7c6a9135e7b888b1f44a0751d70899a681dc4f"} Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.519404 4957 scope.go:117] "RemoveContainer" containerID="e22e24c80c15a0e5f16d06f39d369ddb7c1afad34d7ca45c6a5be625f907646e" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.519374 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-8khxm" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.524495 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d"} Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.527779 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.537402 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvtnr" event={"ID":"edd8fcc2-cbe5-4677-95b2-009ed030745d","Type":"ContainerStarted","Data":"f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f"} Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.558767 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.564131 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.583434 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.585171 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.597782 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.601966 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.601985 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.602026 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.623725 4957 scope.go:117] "RemoveContainer" containerID="51bd9b495b29bbbf815063ce470d15d7a13ac328ed2c0c266ae4c3bf97eb1abb" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.629809 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config" (OuterVolumeSpecName: "config") pod "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" (UID: "83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.703571 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.950688 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7skhk"] Nov 28 21:10:40 crc kubenswrapper[4957]: I1128 21:10:40.972524 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lm4b6"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.011495 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.029701 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-8khxm"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.428990 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.513822 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.567594 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lm4b6" event={"ID":"eb8d4ba5-28bb-41f2-8158-04d673e8ee19","Type":"ContainerStarted","Data":"4908f83b174af36fcd2b1c1bcea5d570c39c0ac08dc6da1f829c0246d6d0b817"} Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.582313 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvtnr" 
event={"ID":"edd8fcc2-cbe5-4677-95b2-009ed030745d","Type":"ContainerStarted","Data":"650e485e9fa708f9860ca253b6582dfa15af26b04c31300a18c975992a5823fa"} Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.592478 4957 generic.go:334] "Generic (PLEG): container finished" podID="93d1f696-9f60-4f4e-817f-98ad8c3800a1" containerID="adc3a5adb6d020827250fff58b183098047b90417b5d7d87014c7bb3c7f580b6" exitCode=0 Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.592539 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5" event={"ID":"93d1f696-9f60-4f4e-817f-98ad8c3800a1","Type":"ContainerDied","Data":"adc3a5adb6d020827250fff58b183098047b90417b5d7d87014c7bb3c7f580b6"} Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.592566 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5" event={"ID":"93d1f696-9f60-4f4e-817f-98ad8c3800a1","Type":"ContainerStarted","Data":"6aed7a886a7b835e47de81062c1b208c96d337420a68f7af695dff91f5c0a705"} Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.593428 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9dz9"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.608829 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tzbj8"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.613960 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7skhk" event={"ID":"8bcfaba7-030f-4415-b1c0-79820941039b","Type":"ContainerStarted","Data":"616f113bb64fe4fd85dec432f17e8bb9482c186690eab9524bebdbf24051f5a4"} Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.620974 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.653256 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnxk5"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.656739 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cvtnr" podStartSLOduration=3.656708523 podStartE2EDuration="3.656708523s" podCreationTimestamp="2025-11-28 21:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:41.634000094 +0000 UTC m=+1281.102648003" watchObservedRunningTime="2025-11-28 21:10:41.656708523 +0000 UTC m=+1281.125356442" Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.688755 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:10:41 crc kubenswrapper[4957]: W1128 21:10:41.730294 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf693abe9_5b02_4359_8522_bc89360df2b0.slice/crio-69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1 WatchSource:0}: Error finding container 69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1: Status 404 returned error can't find the container with id 69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1 Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.841246 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:10:41 crc kubenswrapper[4957]: I1128 21:10:41.984959 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 
21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.225455 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.282862 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.282964 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.283132 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.283199 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5h6k\" (UniqueName: \"kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.283297 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.283338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config\") pod \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\" (UID: \"93d1f696-9f60-4f4e-817f-98ad8c3800a1\") " Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.287745 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k" (OuterVolumeSpecName: "kube-api-access-r5h6k") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "kube-api-access-r5h6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.312524 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.319452 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config" (OuterVolumeSpecName: "config") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.323440 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.328895 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.343265 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93d1f696-9f60-4f4e-817f-98ad8c3800a1" (UID: "93d1f696-9f60-4f4e-817f-98ad8c3800a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386285 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386808 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386833 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386845 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386854 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93d1f696-9f60-4f4e-817f-98ad8c3800a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.386862 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5h6k\" (UniqueName: \"kubernetes.io/projected/93d1f696-9f60-4f4e-817f-98ad8c3800a1-kube-api-access-r5h6k\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.624955 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerStarted","Data":"69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.628195 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tzbj8" 
event={"ID":"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8","Type":"ContainerStarted","Data":"ea6d41b09b67cc6934e4ec92ac366238ba3a251e032f17bbbefba63fff87adb1"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.628236 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tzbj8" event={"ID":"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8","Type":"ContainerStarted","Data":"ad963f1325f0b032d9bb7c03390346e7462bb3221e507a8bdaa43d607c8be5da"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.630174 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9dz9" event={"ID":"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb","Type":"ContainerStarted","Data":"ed32afdaa191e764ed98c35ccdee20a125ac2f9bade96bc7da2012716607d774"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.631343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnxk5" event={"ID":"1717eaea-018a-4e9d-af82-ce7b3fb3868e","Type":"ContainerStarted","Data":"adc4cfcf3bdef0807a2fa24fabe2d6e44be4eafcafdfd3174e161cda966480fd"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.632334 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerStarted","Data":"9729addf83272178c2630c3aa14e39594d85f652be94892e355edcca8b84ec3a"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.634171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5" event={"ID":"93d1f696-9f60-4f4e-817f-98ad8c3800a1","Type":"ContainerDied","Data":"6aed7a886a7b835e47de81062c1b208c96d337420a68f7af695dff91f5c0a705"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.634202 4957 scope.go:117] "RemoveContainer" containerID="adc3a5adb6d020827250fff58b183098047b90417b5d7d87014c7bb3c7f580b6" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.634314 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-t8ts5" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.643266 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tzbj8" podStartSLOduration=3.643249115 podStartE2EDuration="3.643249115s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:42.643127342 +0000 UTC m=+1282.111775251" watchObservedRunningTime="2025-11-28 21:10:42.643249115 +0000 UTC m=+1282.111897024" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.647281 4957 generic.go:334] "Generic (PLEG): container finished" podID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerID="2ae8e9823ad342a1a3c6c272ebd4a9ad8119ef546e0bdf93a607651264c30b7b" exitCode=0 Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.648302 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" event={"ID":"6a202627-5f8e-4fc1-a99f-741e57e7e973","Type":"ContainerDied","Data":"2ae8e9823ad342a1a3c6c272ebd4a9ad8119ef546e0bdf93a607651264c30b7b"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.648334 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" event={"ID":"6a202627-5f8e-4fc1-a99f-741e57e7e973","Type":"ContainerStarted","Data":"52cbc095ecfab445ffbdb5bbfccc0197023b72c1de1e180f77586617a52280be"} Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.801450 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"] Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.830404 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" path="/var/lib/kubelet/pods/83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9/volumes" Nov 28 21:10:42 crc kubenswrapper[4957]: I1128 21:10:42.836051 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-t8ts5"] Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.021058 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:43 crc kubenswrapper[4957]: W1128 21:10:43.028296 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe8242f3_c4e1_4042_b678_a37f47092b7f.slice/crio-a58b8b0a3b8e474d158e48c35977d2cc1c1f4ff8ae1012f8d2e2be5207d3d745 WatchSource:0}: Error finding container a58b8b0a3b8e474d158e48c35977d2cc1c1f4ff8ae1012f8d2e2be5207d3d745: Status 404 returned error can't find the container with id a58b8b0a3b8e474d158e48c35977d2cc1c1f4ff8ae1012f8d2e2be5207d3d745 Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.663736 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" event={"ID":"6a202627-5f8e-4fc1-a99f-741e57e7e973","Type":"ContainerStarted","Data":"5cd569d7535acc4d423ce00d03a4e12e51ccc11b41ea84b273629b5c20c7bc96"} Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.665455 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.680494 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerStarted","Data":"a58b8b0a3b8e474d158e48c35977d2cc1c1f4ff8ae1012f8d2e2be5207d3d745"} Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.682050 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerStarted","Data":"20b03f8b4cd2498568b9b8f605968641a76be8942008262d71312e8a67d9d680"} Nov 28 21:10:43 crc kubenswrapper[4957]: I1128 21:10:43.701901 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" podStartSLOduration=4.701885193 podStartE2EDuration="4.701885193s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:43.697785632 +0000 UTC m=+1283.166433541" watchObservedRunningTime="2025-11-28 21:10:43.701885193 +0000 UTC m=+1283.170533102" Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.723228 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerStarted","Data":"e0ebc49726b1ba0af6abe28d98ce90b69b0ed70729843dcd307cecb74e759c9d"} Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.723306 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-log" containerID="cri-o://20b03f8b4cd2498568b9b8f605968641a76be8942008262d71312e8a67d9d680" gracePeriod=30 Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.723422 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-httpd" containerID="cri-o://e0ebc49726b1ba0af6abe28d98ce90b69b0ed70729843dcd307cecb74e759c9d" gracePeriod=30 Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.732031 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerStarted","Data":"cc0f3b4be5e50e018a028766dacf11369eacd1a3d8323ca59ee6232466f2c96e"} Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.752746 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.752725288 podStartE2EDuration="6.752725288s" podCreationTimestamp="2025-11-28 21:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:44.745737135 +0000 UTC m=+1284.214385044" watchObservedRunningTime="2025-11-28 21:10:44.752725288 +0000 UTC m=+1284.221373197" Nov 28 21:10:44 crc kubenswrapper[4957]: I1128 21:10:44.828822 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d1f696-9f60-4f4e-817f-98ad8c3800a1" path="/var/lib/kubelet/pods/93d1f696-9f60-4f4e-817f-98ad8c3800a1/volumes" Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.760474 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerStarted","Data":"78b499703c0f329b495ed61da25fa7954badc2f8a5627ac6ef296be5d3f99276"} Nov 28 21:10:45 crc 
kubenswrapper[4957]: I1128 21:10:45.760560 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-log" containerID="cri-o://cc0f3b4be5e50e018a028766dacf11369eacd1a3d8323ca59ee6232466f2c96e" gracePeriod=30 Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.760624 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-httpd" containerID="cri-o://78b499703c0f329b495ed61da25fa7954badc2f8a5627ac6ef296be5d3f99276" gracePeriod=30 Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.763880 4957 generic.go:334] "Generic (PLEG): container finished" podID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerID="e0ebc49726b1ba0af6abe28d98ce90b69b0ed70729843dcd307cecb74e759c9d" exitCode=0 Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.763921 4957 generic.go:334] "Generic (PLEG): container finished" podID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerID="20b03f8b4cd2498568b9b8f605968641a76be8942008262d71312e8a67d9d680" exitCode=143 Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.763956 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerDied","Data":"e0ebc49726b1ba0af6abe28d98ce90b69b0ed70729843dcd307cecb74e759c9d"} Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.764047 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerDied","Data":"20b03f8b4cd2498568b9b8f605968641a76be8942008262d71312e8a67d9d680"} Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.776188 4957 generic.go:334] "Generic (PLEG): container finished" podID="edd8fcc2-cbe5-4677-95b2-009ed030745d" containerID="650e485e9fa708f9860ca253b6582dfa15af26b04c31300a18c975992a5823fa" exitCode=0 Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.776231 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvtnr" event={"ID":"edd8fcc2-cbe5-4677-95b2-009ed030745d","Type":"ContainerDied","Data":"650e485e9fa708f9860ca253b6582dfa15af26b04c31300a18c975992a5823fa"} Nov 28 21:10:45 crc kubenswrapper[4957]: I1128 21:10:45.793089 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.793071245 podStartE2EDuration="6.793071245s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:10:45.780094095 +0000 UTC m=+1285.248742004" watchObservedRunningTime="2025-11-28 21:10:45.793071245 +0000 UTC m=+1285.261719154" Nov 28 21:10:46 crc kubenswrapper[4957]: I1128 21:10:46.817523 4957 generic.go:334] "Generic (PLEG): container finished" podID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerID="78b499703c0f329b495ed61da25fa7954badc2f8a5627ac6ef296be5d3f99276" exitCode=0 Nov 28 21:10:46 crc kubenswrapper[4957]: I1128 21:10:46.817801 4957 generic.go:334] "Generic (PLEG): container finished" podID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerID="cc0f3b4be5e50e018a028766dacf11369eacd1a3d8323ca59ee6232466f2c96e" exitCode=143 Nov 28 21:10:46 crc kubenswrapper[4957]: I1128 21:10:46.826275 
4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerDied","Data":"78b499703c0f329b495ed61da25fa7954badc2f8a5627ac6ef296be5d3f99276"} Nov 28 21:10:46 crc kubenswrapper[4957]: I1128 21:10:46.826317 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerDied","Data":"cc0f3b4be5e50e018a028766dacf11369eacd1a3d8323ca59ee6232466f2c96e"} Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.467906 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvtnr" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.472733 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.541315 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.541433 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.541483 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.541952 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542246 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542283 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542303 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542376 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45qxh\" (UniqueName: \"kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542398 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542441 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5cpm\" (UniqueName: \"kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542469 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542526 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle\") pod \"30dc29d5-d19f-4e84-b432-bbd13517930f\" (UID: \"30dc29d5-d19f-4e84-b432-bbd13517930f\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542621 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 
21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.542687 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts\") pod \"edd8fcc2-cbe5-4677-95b2-009ed030745d\" (UID: \"edd8fcc2-cbe5-4677-95b2-009ed030745d\") " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.543179 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.543361 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs" (OuterVolumeSpecName: "logs") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.547979 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts" (OuterVolumeSpecName: "scripts") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.549150 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.549371 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.549936 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh" (OuterVolumeSpecName: "kube-api-access-45qxh") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "kube-api-access-45qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.550251 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts" (OuterVolumeSpecName: "scripts") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.550624 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.557474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm" (OuterVolumeSpecName: "kube-api-access-s5cpm") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "kube-api-access-s5cpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.584290 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.594874 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data" (OuterVolumeSpecName: "config-data") pod "edd8fcc2-cbe5-4677-95b2-009ed030745d" (UID: "edd8fcc2-cbe5-4677-95b2-009ed030745d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.603603 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.623387 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data" (OuterVolumeSpecName: "config-data") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.636829 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "30dc29d5-d19f-4e84-b432-bbd13517930f" (UID: "30dc29d5-d19f-4e84-b432-bbd13517930f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650019 4957 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650089 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650106 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650119 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650130 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650167 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30dc29d5-d19f-4e84-b432-bbd13517930f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650180 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650193 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45qxh\" (UniqueName: \"kubernetes.io/projected/edd8fcc2-cbe5-4677-95b2-009ed030745d-kube-api-access-45qxh\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650373 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650392 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5cpm\" (UniqueName: \"kubernetes.io/projected/30dc29d5-d19f-4e84-b432-bbd13517930f-kube-api-access-s5cpm\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650408 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650471 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd8fcc2-cbe5-4677-95b2-009ed030745d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.650483 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc29d5-d19f-4e84-b432-bbd13517930f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.673746 4957 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.757493 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.850638 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvtnr" event={"ID":"edd8fcc2-cbe5-4677-95b2-009ed030745d","Type":"ContainerDied","Data":"f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f"} Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.850681 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a6a50507ec7a72ed174dd51e7cdfba76f41e31761f89274a0c8a68c77abc6f" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.850745 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvtnr" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.853455 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"30dc29d5-d19f-4e84-b432-bbd13517930f","Type":"ContainerDied","Data":"9729addf83272178c2630c3aa14e39594d85f652be94892e355edcca8b84ec3a"} Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.853679 4957 scope.go:117] "RemoveContainer" containerID="e0ebc49726b1ba0af6abe28d98ce90b69b0ed70729843dcd307cecb74e759c9d" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.853862 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.892475 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cvtnr"] Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.908561 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cvtnr"] Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.934129 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.949684 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975408 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975860 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-log" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975871 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-log" Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975886 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d1f696-9f60-4f4e-817f-98ad8c3800a1" containerName="init" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975893 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d1f696-9f60-4f4e-817f-98ad8c3800a1" containerName="init" Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975907 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" 
containerName="glance-httpd" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975912 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-httpd" Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975921 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="init" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975927 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="init" Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975936 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="dnsmasq-dns" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975941 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="dnsmasq-dns" Nov 28 21:10:47 crc kubenswrapper[4957]: E1128 21:10:47.975952 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd8fcc2-cbe5-4677-95b2-009ed030745d" containerName="keystone-bootstrap" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.975958 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd8fcc2-cbe5-4677-95b2-009ed030745d" containerName="keystone-bootstrap" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.976150 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fb16a0-4b4b-4ea4-b8fd-3f6cf26d1fe9" containerName="dnsmasq-dns" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.976169 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-httpd" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.976180 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d1f696-9f60-4f4e-817f-98ad8c3800a1" containerName="init" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.976194 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" containerName="glance-log" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.978074 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd8fcc2-cbe5-4677-95b2-009ed030745d" containerName="keystone-bootstrap" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.979203 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.986273 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.986318 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 21:10:47 crc kubenswrapper[4957]: I1128 21:10:47.989362 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:47.999193 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5gcld"] Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.001731 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.009833 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.010025 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.010036 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.010145 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zhjp" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.010241 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.021369 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5gcld"] Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073025 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073067 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073111 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073130 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073291 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073399 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073448 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073500 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fml\" (UniqueName: \"kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073578 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073609 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073685 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073772 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jsz\" (UniqueName: \"kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.073861 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.176971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fml\" (UniqueName: \"kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc 
kubenswrapper[4957]: I1128 21:10:48.177067 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185015 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185112 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185223 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jsz\" (UniqueName: \"kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185462 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185501 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185618 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185883 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.185922 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.189470 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.192830 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.193450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.195168 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.197180 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.199485 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fml\" (UniqueName: \"kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml\") pod 
\"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.199516 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.205090 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.205035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.211798 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.212505 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.213052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.214852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.215336 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jsz\" (UniqueName: \"kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz\") pod \"keystone-bootstrap-5gcld\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.239572 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.301743 4957 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.322802 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.829047 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dc29d5-d19f-4e84-b432-bbd13517930f" path="/var/lib/kubelet/pods/30dc29d5-d19f-4e84-b432-bbd13517930f/volumes" Nov 28 21:10:48 crc kubenswrapper[4957]: I1128 21:10:48.829912 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd8fcc2-cbe5-4677-95b2-009ed030745d" path="/var/lib/kubelet/pods/edd8fcc2-cbe5-4677-95b2-009ed030745d/volumes" Nov 28 21:10:49 crc kubenswrapper[4957]: I1128 21:10:49.949452 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" Nov 28 21:10:50 crc kubenswrapper[4957]: I1128 21:10:50.002913 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:10:50 crc kubenswrapper[4957]: I1128 21:10:50.003196 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hdgrw" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" containerID="cri-o://cc2641fa24f7103798753365129c09b6c64c000e64451f5099c5bd16a07fbc5d" gracePeriod=10 Nov 28 21:10:50 crc kubenswrapper[4957]: I1128 21:10:50.887870 4957 generic.go:334] "Generic (PLEG): container finished" podID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerID="cc2641fa24f7103798753365129c09b6c64c000e64451f5099c5bd16a07fbc5d" exitCode=0 Nov 28 21:10:50 crc kubenswrapper[4957]: I1128 21:10:50.887908 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerDied","Data":"cc2641fa24f7103798753365129c09b6c64c000e64451f5099c5bd16a07fbc5d"} Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.458795 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523182 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523339 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523431 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523456 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523614 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.523679 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2pq\" (UniqueName: \"kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq\") pod \"be8242f3-c4e1-4042-b678-a37f47092b7f\" (UID: \"be8242f3-c4e1-4042-b678-a37f47092b7f\") " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.526558 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.527084 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs" (OuterVolumeSpecName: "logs") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.532230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts" (OuterVolumeSpecName: "scripts") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.532465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.549701 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq" (OuterVolumeSpecName: "kube-api-access-wb2pq") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "kube-api-access-wb2pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.558278 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.584421 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.614016 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data" (OuterVolumeSpecName: "config-data") pod "be8242f3-c4e1-4042-b678-a37f47092b7f" (UID: "be8242f3-c4e1-4042-b678-a37f47092b7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626572 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626612 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626623 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2pq\" (UniqueName: \"kubernetes.io/projected/be8242f3-c4e1-4042-b678-a37f47092b7f-kube-api-access-wb2pq\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626633 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626665 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626676 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626685 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be8242f3-c4e1-4042-b678-a37f47092b7f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.626693 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be8242f3-c4e1-4042-b678-a37f47092b7f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.655454 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.728502 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.935561 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be8242f3-c4e1-4042-b678-a37f47092b7f","Type":"ContainerDied","Data":"a58b8b0a3b8e474d158e48c35977d2cc1c1f4ff8ae1012f8d2e2be5207d3d745"} Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.935604 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.982543 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:54 crc kubenswrapper[4957]: I1128 21:10:54.999589 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.007888 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:55 crc kubenswrapper[4957]: E1128 21:10:55.008409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-log" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.008427 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-log" Nov 28 21:10:55 crc kubenswrapper[4957]: E1128 21:10:55.008459 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-httpd" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.008465 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-httpd" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.008673 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-httpd" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.008698 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" containerName="glance-log" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.009792 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.012335 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.013401 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.017875 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.034871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.034928 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.034970 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.034986 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.035017 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69m5\" (UniqueName: \"kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.035044 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.035140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.035261 4957 reconciler_common.go:245] 
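
The reflector.go "Caches populated for *v1.Secret" entries show the kubelet's watch-based caches warming up for the secrets the new pod mounts (cert-glance-default-public-svc, glance-default-external-config-data). The same reflector/informer machinery is exposed by client-go; a sketch of syncing a Secret cache for the openstack namespace, assuming in-cluster credentials:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A shared informer keeps a local cache in sync via list+watch,
	// the same pattern behind the reflector.go lines in the log.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 0, informers.WithNamespace("openstack"))
	secrets := factory.Core().V1().Secrets().Informer()

	ctx := context.Background()
	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.Secret") // analogous log point
}
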
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137269 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137342 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137364 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137404 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69m5\" (UniqueName: \"kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137436 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137542 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137566 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137602 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.137979 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.138297 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.141674 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.142187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.143480 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.143858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.155523 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69m5\" (UniqueName: \"kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.174861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " pod="openstack/glance-default-external-api-0" Nov 28 21:10:55 crc kubenswrapper[4957]: I1128 21:10:55.341903 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:10:56 crc kubenswrapper[4957]: I1128 21:10:56.824329 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8242f3-c4e1-4042-b678-a37f47092b7f" path="/var/lib/kubelet/pods/be8242f3-c4e1-4042-b678-a37f47092b7f/volumes" Nov 28 21:10:58 crc kubenswrapper[4957]: E1128 21:10:58.429649 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 28 21:10:58 crc kubenswrapper[4957]: E1128 21:10:58.430544 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhc8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7skhk_openstack(8bcfaba7-030f-4415-b1c0-79820941039b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:10:58 crc kubenswrapper[4957]: E1128 21:10:58.431882 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7skhk" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" Nov 28 21:10:58 crc kubenswrapper[4957]: I1128 21:10:58.972189 4957 generic.go:334] "Generic (PLEG): container finished" podID="736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" containerID="ea6d41b09b67cc6934e4ec92ac366238ba3a251e032f17bbbefba63fff87adb1" exitCode=0 Nov 28 21:10:58 crc kubenswrapper[4957]: 
I1128 21:10:58.973315 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tzbj8" event={"ID":"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8","Type":"ContainerDied","Data":"ea6d41b09b67cc6934e4ec92ac366238ba3a251e032f17bbbefba63fff87adb1"} Nov 28 21:10:58 crc kubenswrapper[4957]: E1128 21:10:58.974459 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-7skhk" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" Nov 28 21:10:59 crc kubenswrapper[4957]: I1128 21:10:59.848258 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hdgrw" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Nov 28 21:11:04 crc kubenswrapper[4957]: I1128 21:11:04.848982 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hdgrw" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.156842 4957 scope.go:117] "RemoveContainer" containerID="20b03f8b4cd2498568b9b8f605968641a76be8942008262d71312e8a67d9d680" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.298998 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.367997 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb\") pod \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.368064 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m94dx\" (UniqueName: \"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx\") pod \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.368129 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc\") pod \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.368276 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb\") pod \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.368397 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config\") pod \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\" (UID: \"e2f92833-2316-4e8c-ae77-e50ee7b07d4f\") " Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.389702 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
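
The repeating prober.go failures for dnsmasq-dns-698758b865-hdgrw are a plain TCP readiness probe timing out against 10.217.0.144:5353 while that pod is being torn down. The check itself is just a dial with a deadline; a minimal equivalent:

package main

import (
	"fmt"
	"net"
	"time"
)

// tcpReadinessProbe mirrors what a kubelet TCP probe does: a dial with a
// timeout, where any error (including i/o timeout) means "not ready".
func tcpReadinessProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.144:5353: i/o timeout"
	}
	return conn.Close()
}

func main() {
	if err := tcpReadinessProbe("10.217.0.144:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
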
"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx" (OuterVolumeSpecName: "kube-api-access-m94dx") pod "e2f92833-2316-4e8c-ae77-e50ee7b07d4f" (UID: "e2f92833-2316-4e8c-ae77-e50ee7b07d4f"). InnerVolumeSpecName "kube-api-access-m94dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.471477 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m94dx\" (UniqueName: \"kubernetes.io/projected/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-kube-api-access-m94dx\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.513970 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2f92833-2316-4e8c-ae77-e50ee7b07d4f" (UID: "e2f92833-2316-4e8c-ae77-e50ee7b07d4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.520434 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config" (OuterVolumeSpecName: "config") pod "e2f92833-2316-4e8c-ae77-e50ee7b07d4f" (UID: "e2f92833-2316-4e8c-ae77-e50ee7b07d4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.529278 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2f92833-2316-4e8c-ae77-e50ee7b07d4f" (UID: "e2f92833-2316-4e8c-ae77-e50ee7b07d4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.541793 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2f92833-2316-4e8c-ae77-e50ee7b07d4f" (UID: "e2f92833-2316-4e8c-ae77-e50ee7b07d4f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.573774 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.573799 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.573808 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:06 crc kubenswrapper[4957]: I1128 21:11:06.573816 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f92833-2316-4e8c-ae77-e50ee7b07d4f-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.015838 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tzbj8" Nov 28 21:11:07 crc kubenswrapper[4957]: E1128 21:11:07.018382 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 28 21:11:07 crc kubenswrapper[4957]: E1128 21:11:07.018569 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wnxk5_openstack(1717eaea-018a-4e9d-af82-ce7b3fb3868e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:11:07 crc kubenswrapper[4957]: E1128 21:11:07.019701 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wnxk5" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.082074 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tzbj8" event={"ID":"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8","Type":"ContainerDied","Data":"ad963f1325f0b032d9bb7c03390346e7462bb3221e507a8bdaa43d607c8be5da"} Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.082101 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tzbj8" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.082118 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad963f1325f0b032d9bb7c03390346e7462bb3221e507a8bdaa43d607c8be5da" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.085080 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hdgrw" event={"ID":"e2f92833-2316-4e8c-ae77-e50ee7b07d4f","Type":"ContainerDied","Data":"de5478ab264462123bfc8125b04057b07b43858297b3ff92b1569291fefd1e4a"} Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.085122 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hdgrw" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.085292 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle\") pod \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.085687 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv\") pod \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.085826 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config\") pod \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\" (UID: \"736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8\") " Nov 28 21:11:07 crc kubenswrapper[4957]: E1128 21:11:07.086762 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wnxk5" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.089406 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv" (OuterVolumeSpecName: "kube-api-access-lc6nv") pod "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" (UID: "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8"). InnerVolumeSpecName "kube-api-access-lc6nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.122039 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" (UID: "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.129536 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.139183 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hdgrw"] Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.139707 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config" (OuterVolumeSpecName: "config") pod "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" (UID: "736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.188377 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.189035 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:07 crc kubenswrapper[4957]: I1128 21:11:07.189104 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8-kube-api-access-lc6nv\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.167975 4957 scope.go:117] "RemoveContainer" containerID="78b499703c0f329b495ed61da25fa7954badc2f8a5627ac6ef296be5d3f99276" Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.172747 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.172872 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w68g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lm4b6_openstack(eb8d4ba5-28bb-41f2-8158-04d673e8ee19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.174100 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lm4b6" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.303345 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.304024 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="init" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.304037 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="init" Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.304046 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.304052 4957 
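
The cinder-db-sync spec dumped above runs /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, mounting config-data at /var/lib/kolla/config_files/config.json (SubPath db-sync-config.json): kolla_set_configs reads that JSON and copies the listed files into place before the service command runs. The config.json shape it consumes looks roughly like this (illustrative command and paths, not the actual cinder file):

package main

import (
	"encoding/json"
	"fmt"
)

// kollaConfig sketches the config.json schema kolla_set_configs consumes.
type kollaConfig struct {
	Command     string `json:"command"`
	ConfigFiles []struct {
		Source string `json:"source"`
		Dest   string `json:"dest"`
		Owner  string `json:"owner"`
		Perm   string `json:"perm"`
	} `json:"config_files"`
}

func main() {
	raw := `{
	  "command": "cinder-manage db sync",
	  "config_files": [
	    {"source": "/var/lib/config-data/merged/cinder.conf",
	     "dest": "/etc/cinder/cinder.conf",
	     "owner": "cinder", "perm": "0600"}
	  ]
	}`
	var c kollaConfig
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("would run %q after copying %d file(s)\n", c.Command, len(c.ConfigFiles))
}
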
state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" Nov 28 21:11:08 crc kubenswrapper[4957]: E1128 21:11:08.304075 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" containerName="neutron-db-sync" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.304081 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" containerName="neutron-db-sync" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.304279 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" containerName="neutron-db-sync" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.304295 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.306662 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.367390 4957 scope.go:117] "RemoveContainer" containerID="cc0f3b4be5e50e018a028766dacf11369eacd1a3d8323ca59ee6232466f2c96e" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419199 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjr6\" (UniqueName: \"kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419458 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419492 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419555 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.419657 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.467569 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.482369 4957 scope.go:117] "RemoveContainer" containerID="cc2641fa24f7103798753365129c09b6c64c000e64451f5099c5bd16a07fbc5d" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.497261 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.499051 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.505737 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.506072 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s4ndb" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.509042 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.509187 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.512416 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542547 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjr6\" (UniqueName: \"kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542664 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542768 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" 
(UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.542892 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.545798 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.545845 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.545852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.546375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.549020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.588734 4957 scope.go:117] "RemoveContainer" containerID="fb6361d1da4f2a42d13dc0e06707f7ca5123443b3ce0275e20363eedacdc5014" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.590940 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjr6\" (UniqueName: \"kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6\") pod \"dnsmasq-dns-55f844cf75-t9fwz\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.645158 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.645316 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.645642 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.645750 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.645872 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqqf\" (UniqueName: \"kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.751698 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.751988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.752016 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqqf\" (UniqueName: \"kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.752102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.752164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.757426 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.758715 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.760493 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.761555 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.780570 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqqf\" (UniqueName: \"kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf\") pod \"neutron-76cf686b44-wd445\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.858761 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.884247 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" path="/var/lib/kubelet/pods/e2f92833-2316-4e8c-ae77-e50ee7b07d4f/volumes" Nov 28 21:11:08 crc kubenswrapper[4957]: I1128 21:11:08.924927 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5gcld"] Nov 28 21:11:08 crc kubenswrapper[4957]: W1128 21:11:08.930090 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f3672ea_b9dd_4253_8a05_7bdff6a0d9f1.slice/crio-5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a WatchSource:0}: Error finding container 5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a: Status 404 returned error can't find the container with id 5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.013119 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.116597 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9dz9" event={"ID":"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb","Type":"ContainerStarted","Data":"b4eec643e13f90868dfe11ab06c8c7c77004d3dd8d5555762d606ab2fb302d38"} Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.131768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5gcld" event={"ID":"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1","Type":"ContainerStarted","Data":"5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a"} Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.144268 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.163856 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v9dz9" podStartSLOduration=3.6550602359999997 podStartE2EDuration="30.163836594s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="2025-11-28 21:10:41.63017072 +0000 UTC m=+1281.098818629" lastFinishedPulling="2025-11-28 21:11:08.138947078 +0000 UTC m=+1307.607594987" observedRunningTime="2025-11-28 21:11:09.146842185 +0000 UTC m=+1308.615490094" watchObservedRunningTime="2025-11-28 21:11:09.163836594 +0000 UTC m=+1308.632484493" Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.210289 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerStarted","Data":"2e392aec32cd413913190e3be82201f9703c1b06c183b83d37e6c0c757bb158c"} Nov 28 21:11:09 crc kubenswrapper[4957]: E1128 21:11:09.261755 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-lm4b6" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.557884 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:09 crc kubenswrapper[4957]: W1128 21:11:09.562256 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc03480_9a62_4730_a35b_7335deece98a.slice/crio-09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff WatchSource:0}: Error finding container 09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff: Status 404 returned error can't find the container with id 09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.850494 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hdgrw" podUID="e2f92833-2316-4e8c-ae77-e50ee7b07d4f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Nov 28 21:11:09 crc kubenswrapper[4957]: I1128 21:11:09.940971 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:11:10 crc kubenswrapper[4957]: I1128 21:11:10.262539 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerStarted","Data":"19a0fcb575d82ad4795cf4395b1482b0e29995fb9515b8dfbe8ffeebbc49f3f3"} Nov 28 21:11:10 crc kubenswrapper[4957]: I1128 21:11:10.266814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerStarted","Data":"fae3f56e8c5315cbb94559d2793f822e84e023ce6e98010790c705d7dc808867"} Nov 28 21:11:10 crc kubenswrapper[4957]: I1128 21:11:10.269150 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" event={"ID":"fdc03480-9a62-4730-a35b-7335deece98a","Type":"ContainerStarted","Data":"09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff"} Nov 28 21:11:10 crc kubenswrapper[4957]: I1128 21:11:10.619802 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.045256 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d88444bf-br9dz"] Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.048105 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.051269 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.051483 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.079808 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d88444bf-br9dz"] Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.122667 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-ovndb-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.122809 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54x6\" (UniqueName: \"kubernetes.io/projected/f83754ad-9910-4042-9995-ca4dec9d9a29-kube-api-access-j54x6\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.122904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-public-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.122981 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.123034 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-httpd-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.123199 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-internal-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.123267 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-combined-ca-bundle\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.296983 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-internal-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.297088 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-combined-ca-bundle\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.297505 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-ovndb-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.297645 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j54x6\" (UniqueName: \"kubernetes.io/projected/f83754ad-9910-4042-9995-ca4dec9d9a29-kube-api-access-j54x6\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.300333 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-public-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.300485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.300537 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-httpd-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.305154 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-public-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.305243 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-ovndb-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.305374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-combined-ca-bundle\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.309374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.311272 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-internal-tls-certs\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.313323 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f83754ad-9910-4042-9995-ca4dec9d9a29-httpd-config\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.315360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j54x6\" (UniqueName: \"kubernetes.io/projected/f83754ad-9910-4042-9995-ca4dec9d9a29-kube-api-access-j54x6\") pod \"neutron-59d88444bf-br9dz\" (UID: \"f83754ad-9910-4042-9995-ca4dec9d9a29\") " pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.317897 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerStarted","Data":"649bfd69fec0adf02d30894586ad0679acec643886fbf82bdd84848c58486af0"} Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.317939 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerStarted","Data":"79c31151bf2b1c9ea85ba97e3f03c65183393b0ec93c3de9d362060a50e2a898"} Nov 28 21:11:11 crc kubenswrapper[4957]: I1128 21:11:11.427277 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.010307 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d88444bf-br9dz"] Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.329966 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerStarted","Data":"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.332235 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5gcld" event={"ID":"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1","Type":"ContainerStarted","Data":"196782afafab1ca6b58ee64f7754da79b615278bc94e58431d1eb5db52185e7f"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.336772 4957 generic.go:334] "Generic (PLEG): container finished" podID="fdc03480-9a62-4730-a35b-7335deece98a" containerID="8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5" exitCode=0 Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.336859 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" event={"ID":"fdc03480-9a62-4730-a35b-7335deece98a","Type":"ContainerDied","Data":"8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.338952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerStarted","Data":"31eeb8fa31a4eb4b006669c9c18aaa04124e0678df835b7e79586888328bb713"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.341784 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerStarted","Data":"c4f6936235945a71098c1bbfaab3a32faf3fedb9d6fa2a097b958360cffe85de"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.342247 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.344049 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d88444bf-br9dz" event={"ID":"f83754ad-9910-4042-9995-ca4dec9d9a29","Type":"ContainerStarted","Data":"742821e0de2f626ff180b561b91b38558ba506bde8468c7906788dcaea9bbf40"} Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.348971 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5gcld" podStartSLOduration=25.348958685 podStartE2EDuration="25.348958685s" podCreationTimestamp="2025-11-28 21:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:12.346520724 +0000 UTC m=+1311.815168633" watchObservedRunningTime="2025-11-28 21:11:12.348958685 +0000 UTC m=+1311.817606594" Nov 28 21:11:12 crc kubenswrapper[4957]: I1128 21:11:12.398256 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76cf686b44-wd445" podStartSLOduration=4.398238639 podStartE2EDuration="4.398238639s" podCreationTimestamp="2025-11-28 21:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:12.394878776 
+0000 UTC m=+1311.863526685" watchObservedRunningTime="2025-11-28 21:11:12.398238639 +0000 UTC m=+1311.866886548" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.355525 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7skhk" event={"ID":"8bcfaba7-030f-4415-b1c0-79820941039b","Type":"ContainerStarted","Data":"53e8138cc5582a73d87a78ce233f19b8b87c69596b872fffe143e3c864a3cb2e"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.358479 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerStarted","Data":"19dce4167d490c07875c84ad0d8387aa943adef0205cd020960713fd80a0aff6"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.359828 4957 generic.go:334] "Generic (PLEG): container finished" podID="9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" containerID="b4eec643e13f90868dfe11ab06c8c7c77004d3dd8d5555762d606ab2fb302d38" exitCode=0 Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.359864 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9dz9" event={"ID":"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb","Type":"ContainerDied","Data":"b4eec643e13f90868dfe11ab06c8c7c77004d3dd8d5555762d606ab2fb302d38"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.361651 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerStarted","Data":"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.363033 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" event={"ID":"fdc03480-9a62-4730-a35b-7335deece98a","Type":"ContainerStarted","Data":"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.363164 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.422662 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerStarted","Data":"5933658893a12aed819e1592d5775a5c22c4e2fe4131c71bf20c0e29d5068eae"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.428958 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-7skhk" podStartSLOduration=3.5631659989999998 podStartE2EDuration="35.428940708s" podCreationTimestamp="2025-11-28 21:10:38 +0000 UTC" firstStartedPulling="2025-11-28 21:10:40.972598755 +0000 UTC m=+1280.441246664" lastFinishedPulling="2025-11-28 21:11:12.838373464 +0000 UTC m=+1312.307021373" observedRunningTime="2025-11-28 21:11:13.4189016 +0000 UTC m=+1312.887549509" watchObservedRunningTime="2025-11-28 21:11:13.428940708 +0000 UTC m=+1312.897588617" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.430153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d88444bf-br9dz" event={"ID":"f83754ad-9910-4042-9995-ca4dec9d9a29","Type":"ContainerStarted","Data":"b6cba28c8111d7ef4e94cfca469e5c21688f285de65465ffd8ceacebd4262017"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.430231 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d88444bf-br9dz" 
event={"ID":"f83754ad-9910-4042-9995-ca4dec9d9a29","Type":"ContainerStarted","Data":"5917d65eba2c73ebdb62944d6a3c1f31653a298fd6ac52a92c61f9b95be82d57"} Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.476226 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.476194062 podStartE2EDuration="19.476194062s" podCreationTimestamp="2025-11-28 21:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:13.475838733 +0000 UTC m=+1312.944486802" watchObservedRunningTime="2025-11-28 21:11:13.476194062 +0000 UTC m=+1312.944841961" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.508647 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" podStartSLOduration=5.508623291 podStartE2EDuration="5.508623291s" podCreationTimestamp="2025-11-28 21:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:13.4955797 +0000 UTC m=+1312.964227669" watchObservedRunningTime="2025-11-28 21:11:13.508623291 +0000 UTC m=+1312.977271210" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.522835 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59d88444bf-br9dz" podStartSLOduration=2.52279009 podStartE2EDuration="2.52279009s" podCreationTimestamp="2025-11-28 21:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:13.51345171 +0000 UTC m=+1312.982099629" watchObservedRunningTime="2025-11-28 21:11:13.52279009 +0000 UTC m=+1312.991437999" Nov 28 21:11:13 crc kubenswrapper[4957]: I1128 21:11:13.539175 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.539160244 podStartE2EDuration="26.539160244s" podCreationTimestamp="2025-11-28 21:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:13.537268107 +0000 UTC m=+1313.005916026" watchObservedRunningTime="2025-11-28 21:11:13.539160244 +0000 UTC m=+1313.007808153" Nov 28 21:11:14 crc kubenswrapper[4957]: I1128 21:11:14.455647 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:14 crc kubenswrapper[4957]: I1128 21:11:14.932540 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9dz9" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.021330 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs\") pod \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.021379 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data\") pod \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.021460 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz\") pod \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.021492 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts\") pod \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.021568 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle\") pod \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\" (UID: \"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb\") " Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.022164 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs" (OuterVolumeSpecName: "logs") pod "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" (UID: "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.038457 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz" (OuterVolumeSpecName: "kube-api-access-8ppjz") pod "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" (UID: "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb"). InnerVolumeSpecName "kube-api-access-8ppjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.045573 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts" (OuterVolumeSpecName: "scripts") pod "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" (UID: "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.114885 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" (UID: "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.124114 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-kube-api-access-8ppjz\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.124156 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.124169 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.124180 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.126870 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data" (OuterVolumeSpecName: "config-data") pod "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" (UID: "9eef89ff-3725-4c4b-8b08-1e1a6f6369cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.226392 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.343443 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.343489 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.404184 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.404626 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.473285 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9dz9" event={"ID":"9eef89ff-3725-4c4b-8b08-1e1a6f6369cb","Type":"ContainerDied","Data":"ed32afdaa191e764ed98c35ccdee20a125ac2f9bade96bc7da2012716607d774"} Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.473395 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed32afdaa191e764ed98c35ccdee20a125ac2f9bade96bc7da2012716607d774" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.473524 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9dz9" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.474358 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.474387 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.527293 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-956cd8448-hm6cs"] Nov 28 21:11:15 crc kubenswrapper[4957]: E1128 21:11:15.527734 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" containerName="placement-db-sync" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.527754 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" containerName="placement-db-sync" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.527940 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" containerName="placement-db-sync" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.532494 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.535918 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.535987 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.536118 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.536153 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.536256 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xx8qj" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.548701 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-956cd8448-hm6cs"] Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.635979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-combined-ca-bundle\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636021 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-internal-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7g88\" (UniqueName: \"kubernetes.io/projected/271d8bc3-c837-4768-b8de-6b185bfa2659-kube-api-access-r7g88\") pod \"placement-956cd8448-hm6cs\" (UID: 
\"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636200 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-scripts\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636606 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-public-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636753 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/271d8bc3-c837-4768-b8de-6b185bfa2659-logs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.636984 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-config-data\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.738511 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-combined-ca-bundle\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.738554 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-internal-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.738931 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7g88\" (UniqueName: \"kubernetes.io/projected/271d8bc3-c837-4768-b8de-6b185bfa2659-kube-api-access-r7g88\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.738968 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-scripts\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.739040 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-public-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " 
pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.739080 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/271d8bc3-c837-4768-b8de-6b185bfa2659-logs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.739131 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-config-data\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.739942 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/271d8bc3-c837-4768-b8de-6b185bfa2659-logs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.742723 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-config-data\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.742983 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-internal-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.744358 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-public-tls-certs\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.746855 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-combined-ca-bundle\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.748417 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/271d8bc3-c837-4768-b8de-6b185bfa2659-scripts\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.756246 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7g88\" (UniqueName: \"kubernetes.io/projected/271d8bc3-c837-4768-b8de-6b185bfa2659-kube-api-access-r7g88\") pod \"placement-956cd8448-hm6cs\" (UID: \"271d8bc3-c837-4768-b8de-6b185bfa2659\") " pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:15 crc kubenswrapper[4957]: I1128 21:11:15.860941 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:16 crc kubenswrapper[4957]: I1128 21:11:16.391834 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-956cd8448-hm6cs"] Nov 28 21:11:16 crc kubenswrapper[4957]: I1128 21:11:16.483751 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-956cd8448-hm6cs" event={"ID":"271d8bc3-c837-4768-b8de-6b185bfa2659","Type":"ContainerStarted","Data":"70d0483e0af2c12562314b92cbeb701e131cf23d00529253e0422c8e3d70559a"} Nov 28 21:11:17 crc kubenswrapper[4957]: I1128 21:11:17.494193 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-956cd8448-hm6cs" event={"ID":"271d8bc3-c837-4768-b8de-6b185bfa2659","Type":"ContainerStarted","Data":"95bb913abfecdb30a391e6ebd8e57fc4b128fd64130286174daf6e18e9ae8ffa"} Nov 28 21:11:17 crc kubenswrapper[4957]: I1128 21:11:17.495717 4957 generic.go:334] "Generic (PLEG): container finished" podID="1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" containerID="196782afafab1ca6b58ee64f7754da79b615278bc94e58431d1eb5db52185e7f" exitCode=0 Nov 28 21:11:17 crc kubenswrapper[4957]: I1128 21:11:17.495769 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5gcld" event={"ID":"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1","Type":"ContainerDied","Data":"196782afafab1ca6b58ee64f7754da79b615278bc94e58431d1eb5db52185e7f"} Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.302388 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.302735 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.302747 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.302757 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.330574 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.347033 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.513562 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-956cd8448-hm6cs" event={"ID":"271d8bc3-c837-4768-b8de-6b185bfa2659","Type":"ContainerStarted","Data":"963464c841644f49b82a9aa0da84b27f9beb1b034bfcc694b0dbb8a9b569612a"} Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.514698 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.550150 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-956cd8448-hm6cs" podStartSLOduration=3.550130428 podStartE2EDuration="3.550130428s" podCreationTimestamp="2025-11-28 21:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:18.544354936 +0000 UTC m=+1318.013002835" watchObservedRunningTime="2025-11-28 21:11:18.550130428 +0000 UTC 
m=+1318.018778327" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.861452 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.915369 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"] Nov 28 21:11:18 crc kubenswrapper[4957]: I1128 21:11:18.915611 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="dnsmasq-dns" containerID="cri-o://5cd569d7535acc4d423ce00d03a4e12e51ccc11b41ea84b273629b5c20c7bc96" gracePeriod=10 Nov 28 21:11:19 crc kubenswrapper[4957]: I1128 21:11:19.524762 4957 generic.go:334] "Generic (PLEG): container finished" podID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerID="5cd569d7535acc4d423ce00d03a4e12e51ccc11b41ea84b273629b5c20c7bc96" exitCode=0 Nov 28 21:11:19 crc kubenswrapper[4957]: I1128 21:11:19.526195 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" event={"ID":"6a202627-5f8e-4fc1-a99f-741e57e7e973","Type":"ContainerDied","Data":"5cd569d7535acc4d423ce00d03a4e12e51ccc11b41ea84b273629b5c20c7bc96"} Nov 28 21:11:19 crc kubenswrapper[4957]: I1128 21:11:19.526417 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:19 crc kubenswrapper[4957]: I1128 21:11:19.950667 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Nov 28 21:11:20 crc kubenswrapper[4957]: I1128 21:11:20.577797 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 21:11:20 crc kubenswrapper[4957]: I1128 21:11:20.781731 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:20 crc kubenswrapper[4957]: I1128 21:11:20.781848 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 21:11:20 crc kubenswrapper[4957]: I1128 21:11:20.784192 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 21:11:21 crc kubenswrapper[4957]: I1128 21:11:21.981894 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.115681 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.115987 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.116908 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.116955 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jsz\" (UniqueName: \"kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.117657 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.117697 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle\") pod \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\" (UID: \"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.122057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.122977 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz" (OuterVolumeSpecName: "kube-api-access-44jsz") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "kube-api-access-44jsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.125020 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts" (OuterVolumeSpecName: "scripts") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.126465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.155399 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data" (OuterVolumeSpecName: "config-data") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.183276 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" (UID: "1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219583 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219615 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jsz\" (UniqueName: \"kubernetes.io/projected/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-kube-api-access-44jsz\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219628 4957 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219638 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219648 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.219657 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.263538 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.422999 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.423139 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9799j\" (UniqueName: \"kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.423693 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.423762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.423819 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.423911 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config\") pod \"6a202627-5f8e-4fc1-a99f-741e57e7e973\" (UID: \"6a202627-5f8e-4fc1-a99f-741e57e7e973\") " Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.427018 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j" (OuterVolumeSpecName: "kube-api-access-9799j") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "kube-api-access-9799j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.484130 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.484149 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.485985 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.487875 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config" (OuterVolumeSpecName: "config") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.499752 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a202627-5f8e-4fc1-a99f-741e57e7e973" (UID: "6a202627-5f8e-4fc1-a99f-741e57e7e973"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526184 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526227 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526237 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526246 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9799j\" (UniqueName: \"kubernetes.io/projected/6a202627-5f8e-4fc1-a99f-741e57e7e973-kube-api-access-9799j\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526254 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.526263 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a202627-5f8e-4fc1-a99f-741e57e7e973-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.558175 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" event={"ID":"6a202627-5f8e-4fc1-a99f-741e57e7e973","Type":"ContainerDied","Data":"52cbc095ecfab445ffbdb5bbfccc0197023b72c1de1e180f77586617a52280be"} Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.558188 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-ff8lc" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.558266 4957 scope.go:117] "RemoveContainer" containerID="5cd569d7535acc4d423ce00d03a4e12e51ccc11b41ea84b273629b5c20c7bc96" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.561460 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerStarted","Data":"85d175b18493bc929f2c95709a48eb69f7434d802d92fb1ac4035467f5cee6a4"} Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.565014 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnxk5" event={"ID":"1717eaea-018a-4e9d-af82-ce7b3fb3868e","Type":"ContainerStarted","Data":"a1f1280c6d4d31a4f3bbbad3616c9ae987c0b981c04b4bac1d6f2c581e298576"} Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.574101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5gcld" event={"ID":"1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1","Type":"ContainerDied","Data":"5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a"} Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.574141 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d2b7ace8ffdb30210195ec7e5c55029c5ee31c10f49d6d0a4dd7b999df2649a" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.574196 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5gcld" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.591590 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wnxk5" podStartSLOduration=3.355709969 podStartE2EDuration="43.591570451s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="2025-11-28 21:10:41.729994909 +0000 UTC m=+1281.198642818" lastFinishedPulling="2025-11-28 21:11:21.965855391 +0000 UTC m=+1321.434503300" observedRunningTime="2025-11-28 21:11:22.580961429 +0000 UTC m=+1322.049609348" watchObservedRunningTime="2025-11-28 21:11:22.591570451 +0000 UTC m=+1322.060218360" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.611674 4957 scope.go:117] "RemoveContainer" containerID="2ae8e9823ad342a1a3c6c272ebd4a9ad8119ef546e0bdf93a607651264c30b7b" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.634799 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"] Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.672659 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-ff8lc"] Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.827075 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" path="/var/lib/kubelet/pods/6a202627-5f8e-4fc1-a99f-741e57e7e973/volumes" Nov 28 21:11:22 crc kubenswrapper[4957]: I1128 21:11:22.880415 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.163196 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7859c96b89-s4dx8"] Nov 28 21:11:23 crc kubenswrapper[4957]: E1128 21:11:23.164148 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="dnsmasq-dns" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.164172 
4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="dnsmasq-dns" Nov 28 21:11:23 crc kubenswrapper[4957]: E1128 21:11:23.164191 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" containerName="keystone-bootstrap" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.164202 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" containerName="keystone-bootstrap" Nov 28 21:11:23 crc kubenswrapper[4957]: E1128 21:11:23.164250 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="init" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.164258 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="init" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.164651 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" containerName="keystone-bootstrap" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.164691 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a202627-5f8e-4fc1-a99f-741e57e7e973" containerName="dnsmasq-dns" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.165752 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.168186 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zhjp" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.168577 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.169091 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.169373 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.169812 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.174510 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.174623 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7859c96b89-s4dx8"] Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253189 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-fernet-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253772 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-config-data\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253832 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-credential-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253881 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-combined-ca-bundle\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253926 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-internal-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.253964 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-scripts\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.254020 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-public-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.254052 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbjt\" (UniqueName: \"kubernetes.io/projected/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-kube-api-access-dvbjt\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-fernet-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357131 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-config-data\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357221 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-credential-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357286 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-combined-ca-bundle\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357337 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-internal-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357377 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-scripts\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357424 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-public-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.357455 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbjt\" (UniqueName: \"kubernetes.io/projected/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-kube-api-access-dvbjt\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.364343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-internal-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.368309 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-public-tls-certs\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.368895 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-fernet-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.369534 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-scripts\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.373773 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-credential-keys\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.377495 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-combined-ca-bundle\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.377662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbjt\" (UniqueName: \"kubernetes.io/projected/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-kube-api-access-dvbjt\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.379586 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6-config-data\") pod \"keystone-7859c96b89-s4dx8\" (UID: \"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6\") " pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:23 crc kubenswrapper[4957]: I1128 21:11:23.491385 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:24 crc kubenswrapper[4957]: I1128 21:11:24.030069 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7859c96b89-s4dx8"] Nov 28 21:11:24 crc kubenswrapper[4957]: W1128 21:11:24.066530 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ba53d5f_f252_4bd9_b9ac_83b26ffaa9b6.slice/crio-9088b052afb0cf3eb4d85651c5431332612713318181f6d5ae3179b29fb510e3 WatchSource:0}: Error finding container 9088b052afb0cf3eb4d85651c5431332612713318181f6d5ae3179b29fb510e3: Status 404 returned error can't find the container with id 9088b052afb0cf3eb4d85651c5431332612713318181f6d5ae3179b29fb510e3 Nov 28 21:11:24 crc kubenswrapper[4957]: I1128 21:11:24.630920 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7859c96b89-s4dx8" event={"ID":"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6","Type":"ContainerStarted","Data":"8e5d752cf75c00da34420a909a6c9efbec2e6f9b67150d4e48e6d15dac796b32"} Nov 28 21:11:24 crc kubenswrapper[4957]: I1128 21:11:24.631299 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7859c96b89-s4dx8" event={"ID":"1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6","Type":"ContainerStarted","Data":"9088b052afb0cf3eb4d85651c5431332612713318181f6d5ae3179b29fb510e3"} Nov 28 21:11:24 crc kubenswrapper[4957]: I1128 21:11:24.631439 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:24 crc kubenswrapper[4957]: I1128 21:11:24.660792 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7859c96b89-s4dx8" podStartSLOduration=1.660774871 podStartE2EDuration="1.660774871s" podCreationTimestamp="2025-11-28 21:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:24.644267824 +0000 UTC m=+1324.112915733" watchObservedRunningTime="2025-11-28 
21:11:24.660774871 +0000 UTC m=+1324.129422780"
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.647133 4957 generic.go:334] "Generic (PLEG): container finished" podID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" containerID="a1f1280c6d4d31a4f3bbbad3616c9ae987c0b981c04b4bac1d6f2c581e298576" exitCode=0
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.647240 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnxk5" event={"ID":"1717eaea-018a-4e9d-af82-ce7b3fb3868e","Type":"ContainerDied","Data":"a1f1280c6d4d31a4f3bbbad3616c9ae987c0b981c04b4bac1d6f2c581e298576"}
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.655605 4957 generic.go:334] "Generic (PLEG): container finished" podID="8bcfaba7-030f-4415-b1c0-79820941039b" containerID="53e8138cc5582a73d87a78ce233f19b8b87c69596b872fffe143e3c864a3cb2e" exitCode=0
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.655655 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7skhk" event={"ID":"8bcfaba7-030f-4415-b1c0-79820941039b","Type":"ContainerDied","Data":"53e8138cc5582a73d87a78ce233f19b8b87c69596b872fffe143e3c864a3cb2e"}
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.660004 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lm4b6" event={"ID":"eb8d4ba5-28bb-41f2-8158-04d673e8ee19","Type":"ContainerStarted","Data":"40445a3e8247a16e215abebcf0ba9c57cb3ce924c3e64ea2fe3db36c49e5eaf3"}
Nov 28 21:11:25 crc kubenswrapper[4957]: I1128 21:11:25.702350 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lm4b6" podStartSLOduration=3.220777904 podStartE2EDuration="46.702333108s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="2025-11-28 21:10:40.990686391 +0000 UTC m=+1280.459334300" lastFinishedPulling="2025-11-28 21:11:24.472241595 +0000 UTC m=+1323.940889504" observedRunningTime="2025-11-28 21:11:25.701037036 +0000 UTC m=+1325.169684945" watchObservedRunningTime="2025-11-28 21:11:25.702333108 +0000 UTC m=+1325.170981017"
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.568678 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7skhk"
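Each ContainerDied event above is preceded by a generic.go:334 "container finished" record that carries the exit code; both db-sync jobs here exited 0, so these are clean Job completions rather than failures. When triaging a journal like this one, filtering for nonzero exit codes surfaces the interesting lines quickly (later in this window, sg-core finishes with exitCode=2). A hypothetical helper sketch, assuming one journal entry per line as journalctl emits them:

```go
// An assumed triage helper, not kubelet code: scan a saved kubelet journal on
// stdin and print every container that finished with a nonzero exit code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the generic.go:334 format seen in this log, e.g.
//   "Generic (PLEG): container finished" podID="..." containerID="..." exitCode=0
var finished = regexp.MustCompile(`"Generic \(PLEG\): container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := finished.FindStringSubmatch(sc.Text()); m != nil && m[3] != "0" {
			fmt.Printf("pod %s container %.12s exited %s\n", m[1], m[2], m[3])
		}
	}
}
```

Fed this log (for example via `journalctl -u kubelet | go run .`), it would stay quiet through the db-sync completions above and flag only the exitCode=2 from ceilometer-0's sg-core container further down.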
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.591806 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnxk5"
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693181 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data\") pod \"8bcfaba7-030f-4415-b1c0-79820941039b\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693249 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle\") pod \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693489 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqsdb\" (UniqueName: \"kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb\") pod \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693553 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data\") pod \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\" (UID: \"1717eaea-018a-4e9d-af82-ce7b3fb3868e\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693586 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle\") pod \"8bcfaba7-030f-4415-b1c0-79820941039b\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.693625 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc8m\" (UniqueName: \"kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m\") pod \"8bcfaba7-030f-4415-b1c0-79820941039b\" (UID: \"8bcfaba7-030f-4415-b1c0-79820941039b\") "
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.699379 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m" (OuterVolumeSpecName: "kube-api-access-rhc8m") pod "8bcfaba7-030f-4415-b1c0-79820941039b" (UID: "8bcfaba7-030f-4415-b1c0-79820941039b"). InnerVolumeSpecName "kube-api-access-rhc8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.700257 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb" (OuterVolumeSpecName: "kube-api-access-dqsdb") pod "1717eaea-018a-4e9d-af82-ce7b3fb3868e" (UID: "1717eaea-018a-4e9d-af82-ce7b3fb3868e"). InnerVolumeSpecName "kube-api-access-dqsdb".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.705137 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7skhk" event={"ID":"8bcfaba7-030f-4415-b1c0-79820941039b","Type":"ContainerDied","Data":"616f113bb64fe4fd85dec432f17e8bb9482c186690eab9524bebdbf24051f5a4"} Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.705179 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="616f113bb64fe4fd85dec432f17e8bb9482c186690eab9524bebdbf24051f5a4" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.705200 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7skhk" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.706350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnxk5" event={"ID":"1717eaea-018a-4e9d-af82-ce7b3fb3868e","Type":"ContainerDied","Data":"adc4cfcf3bdef0807a2fa24fabe2d6e44be4eafcafdfd3174e161cda966480fd"} Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.706375 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc4cfcf3bdef0807a2fa24fabe2d6e44be4eafcafdfd3174e161cda966480fd" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.706426 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnxk5" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.712872 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1717eaea-018a-4e9d-af82-ce7b3fb3868e" (UID: "1717eaea-018a-4e9d-af82-ce7b3fb3868e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.726009 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bcfaba7-030f-4415-b1c0-79820941039b" (UID: "8bcfaba7-030f-4415-b1c0-79820941039b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.745193 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1717eaea-018a-4e9d-af82-ce7b3fb3868e" (UID: "1717eaea-018a-4e9d-af82-ce7b3fb3868e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.792058 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data" (OuterVolumeSpecName: "config-data") pod "8bcfaba7-030f-4415-b1c0-79820941039b" (UID: "8bcfaba7-030f-4415-b1c0-79820941039b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798244 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798273 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798285 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqsdb\" (UniqueName: \"kubernetes.io/projected/1717eaea-018a-4e9d-af82-ce7b3fb3868e-kube-api-access-dqsdb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798294 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1717eaea-018a-4e9d-af82-ce7b3fb3868e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798304 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcfaba7-030f-4415-b1c0-79820941039b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:29 crc kubenswrapper[4957]: I1128 21:11:29.798313 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhc8m\" (UniqueName: \"kubernetes.io/projected/8bcfaba7-030f-4415-b1c0-79820941039b-kube-api-access-rhc8m\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.908225 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-768b76c799-g58ls"] Nov 28 21:11:30 crc kubenswrapper[4957]: E1128 21:11:30.909107 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" containerName="heat-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.909128 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" containerName="heat-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: E1128 21:11:30.909150 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" containerName="barbican-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.909165 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" containerName="barbican-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.909496 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" containerName="barbican-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.909522 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" containerName="heat-db-sync" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.911013 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.917928 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.918169 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.918446 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-txcbx" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.929497 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d9bff56f6-ghr8m"] Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.931810 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.942267 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.949188 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d9bff56f6-ghr8m"] Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.964474 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-768b76c799-g58ls"] Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.990380 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"] Nov 28 21:11:30 crc kubenswrapper[4957]: I1128 21:11:30.992508 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.002541 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"] Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027078 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nl6\" (UniqueName: \"kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027119 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ww2l\" (UniqueName: \"kubernetes.io/projected/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-kube-api-access-8ww2l\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027183 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-logs\") pod 
\"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027206 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027261 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data-custom\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027299 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027331 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027350 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-combined-ca-bundle\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027367 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-combined-ca-bundle\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027382 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba969ad-04d0-4a30-946d-995723ab4041-logs\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027425 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 
21:11:31.027458 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data-custom\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027485 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmlrj\" (UniqueName: \"kubernetes.io/projected/cba969ad-04d0-4a30-946d-995723ab4041-kube-api-access-jmlrj\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027508 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.027546 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.070938 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"] Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.074778 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.076903 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.085332 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"] Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129741 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129824 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129863 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-combined-ca-bundle\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-combined-ca-bundle\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129921 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba969ad-04d0-4a30-946d-995723ab4041-logs\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129963 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.129994 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data-custom\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130020 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmlrj\" (UniqueName: \"kubernetes.io/projected/cba969ad-04d0-4a30-946d-995723ab4041-kube-api-access-jmlrj\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: 
\"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130044 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130089 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130114 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130137 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130236 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nl6\" (UniqueName: \"kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130342 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ww2l\" (UniqueName: \"kubernetes.io/projected/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-kube-api-access-8ww2l\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130387 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130422 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlxl\" (UniqueName: 
\"kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130458 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-logs\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130490 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130513 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data-custom\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130564 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.130726 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.131714 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cba969ad-04d0-4a30-946d-995723ab4041-logs\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.131747 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.132872 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-logs\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.133045 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.133445 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.133994 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.136678 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data-custom\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.137416 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data-custom\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.138145 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-config-data\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.146911 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-combined-ca-bundle\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.147582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-combined-ca-bundle\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.148406 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba969ad-04d0-4a30-946d-995723ab4041-config-data\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.153855 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nl6\" (UniqueName: 
\"kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6\") pod \"dnsmasq-dns-85ff748b95-87c6s\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.162473 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ww2l\" (UniqueName: \"kubernetes.io/projected/d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc-kube-api-access-8ww2l\") pod \"barbican-worker-768b76c799-g58ls\" (UID: \"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc\") " pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.170865 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmlrj\" (UniqueName: \"kubernetes.io/projected/cba969ad-04d0-4a30-946d-995723ab4041-kube-api-access-jmlrj\") pod \"barbican-keystone-listener-5d9bff56f6-ghr8m\" (UID: \"cba969ad-04d0-4a30-946d-995723ab4041\") " pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.232457 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlxl\" (UniqueName: \"kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.232538 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.232640 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.232660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.232708 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.233057 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.237096 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.241170 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.241940 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.251162 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlxl\" (UniqueName: \"kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl\") pod \"barbican-api-8546f4f854-p9lnp\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.276946 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-768b76c799-g58ls" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.300458 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.327147 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.395879 4957 util.go:30] "No sandbox for pod can be found. 
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.395879 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8546f4f854-p9lnp"
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.753814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerStarted","Data":"eaf0d1da6584bd28d57138949347bc0ecce364690fda79c596acb838bf9a031c"}
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.754371 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-central-agent" containerID="cri-o://2e392aec32cd413913190e3be82201f9703c1b06c183b83d37e6c0c757bb158c" gracePeriod=30
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.754742 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.754940 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="proxy-httpd" containerID="cri-o://eaf0d1da6584bd28d57138949347bc0ecce364690fda79c596acb838bf9a031c" gracePeriod=30
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.755068 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-notification-agent" containerID="cri-o://19dce4167d490c07875c84ad0d8387aa943adef0205cd020960713fd80a0aff6" gracePeriod=30
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.755130 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="sg-core" containerID="cri-o://85d175b18493bc929f2c95709a48eb69f7434d802d92fb1ac4035467f5cee6a4" gracePeriod=30
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.812832 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.851942397 podStartE2EDuration="52.812810737s" podCreationTimestamp="2025-11-28 21:10:39 +0000 UTC" firstStartedPulling="2025-11-28 21:10:41.811551849 +0000 UTC m=+1281.280199758" lastFinishedPulling="2025-11-28 21:11:30.772420189 +0000 UTC m=+1330.241068098" observedRunningTime="2025-11-28 21:11:31.780821199 +0000 UTC m=+1331.249469108" watchObservedRunningTime="2025-11-28 21:11:31.812810737 +0000 UTC m=+1331.281458646"
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.878279 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-768b76c799-g58ls"]
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.917587 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d9bff56f6-ghr8m"]
Nov 28 21:11:31 crc kubenswrapper[4957]: I1128 21:11:31.970563 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"]
Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.201597 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"]
Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.772863 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" event={"ID":"cba969ad-04d0-4a30-946d-995723ab4041","Type":"ContainerStarted","Data":"da1a9f3a975854ae0b5d4c0905706f6a24964117de541f8df4a69aedbe2e6b7a"} Nov 28
21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.776054 4957 generic.go:334] "Generic (PLEG): container finished" podID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerID="525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf" exitCode=0 Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.776153 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" event={"ID":"7e932e95-e1e7-4894-82fe-de55d3f79981","Type":"ContainerDied","Data":"525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.776251 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" event={"ID":"7e932e95-e1e7-4894-82fe-de55d3f79981","Type":"ContainerStarted","Data":"9cda1ac5718feeeee74ebdb3409041b1771cbd69b1d0c3347482a6f8efb441eb"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.777572 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768b76c799-g58ls" event={"ID":"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc","Type":"ContainerStarted","Data":"4684b741a76139d87b06e3bb3933a3e33aea8475a230dbd6f3189500d1a9c82a"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782405 4957 generic.go:334] "Generic (PLEG): container finished" podID="f693abe9-5b02-4359-8522-bc89360df2b0" containerID="eaf0d1da6584bd28d57138949347bc0ecce364690fda79c596acb838bf9a031c" exitCode=0 Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782430 4957 generic.go:334] "Generic (PLEG): container finished" podID="f693abe9-5b02-4359-8522-bc89360df2b0" containerID="85d175b18493bc929f2c95709a48eb69f7434d802d92fb1ac4035467f5cee6a4" exitCode=2 Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782439 4957 generic.go:334] "Generic (PLEG): container finished" podID="f693abe9-5b02-4359-8522-bc89360df2b0" containerID="2e392aec32cd413913190e3be82201f9703c1b06c183b83d37e6c0c757bb158c" exitCode=0 Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782493 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerDied","Data":"eaf0d1da6584bd28d57138949347bc0ecce364690fda79c596acb838bf9a031c"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782531 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerDied","Data":"85d175b18493bc929f2c95709a48eb69f7434d802d92fb1ac4035467f5cee6a4"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.782549 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerDied","Data":"2e392aec32cd413913190e3be82201f9703c1b06c183b83d37e6c0c757bb158c"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.786720 4957 generic.go:334] "Generic (PLEG): container finished" podID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" containerID="40445a3e8247a16e215abebcf0ba9c57cb3ce924c3e64ea2fe3db36c49e5eaf3" exitCode=0 Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.786776 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lm4b6" event={"ID":"eb8d4ba5-28bb-41f2-8158-04d673e8ee19","Type":"ContainerDied","Data":"40445a3e8247a16e215abebcf0ba9c57cb3ce924c3e64ea2fe3db36c49e5eaf3"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.791282 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerStarted","Data":"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.791309 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerStarted","Data":"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.791319 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerStarted","Data":"ab159917fa8dbdd70fdbef68dd9015f1518215c93f0dbe5929909ac6455301ef"} Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.791529 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:32 crc kubenswrapper[4957]: I1128 21:11:32.791561 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.361161 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8546f4f854-p9lnp" podStartSLOduration=3.361143805 podStartE2EDuration="3.361143805s" podCreationTimestamp="2025-11-28 21:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:32.84849928 +0000 UTC m=+1332.317147189" watchObservedRunningTime="2025-11-28 21:11:34.361143805 +0000 UTC m=+1333.829791714" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.376680 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-68cfc84946-tg8qn"] Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.379553 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.384501 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.384912 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.424517 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68cfc84946-tg8qn"] Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-public-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-combined-ca-bundle\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553405 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-internal-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553589 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x7vf\" (UniqueName: \"kubernetes.io/projected/68ad8aac-7884-41b1-8f60-68a111f04c11-kube-api-access-7x7vf\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553698 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553760 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad8aac-7884-41b1-8f60-68a111f04c11-logs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.553792 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data-custom\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654609 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-public-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654673 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-combined-ca-bundle\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654697 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-internal-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654805 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x7vf\" (UniqueName: \"kubernetes.io/projected/68ad8aac-7884-41b1-8f60-68a111f04c11-kube-api-access-7x7vf\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.654927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data-custom\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.655029 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad8aac-7884-41b1-8f60-68a111f04c11-logs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.655454 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad8aac-7884-41b1-8f60-68a111f04c11-logs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.660028 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data-custom\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.660179 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-public-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.660374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-config-data\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.662523 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-combined-ca-bundle\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.664494 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad8aac-7884-41b1-8f60-68a111f04c11-internal-tls-certs\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.672145 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x7vf\" (UniqueName: \"kubernetes.io/projected/68ad8aac-7884-41b1-8f60-68a111f04c11-kube-api-access-7x7vf\") pod \"barbican-api-68cfc84946-tg8qn\" (UID: \"68ad8aac-7884-41b1-8f60-68a111f04c11\") " pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.709840 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.736940 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lm4b6" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.755629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.756671 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.756779 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.756965 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w68g\" (UniqueName: \"kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.757075 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.757240 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data\") pod \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\" (UID: \"eb8d4ba5-28bb-41f2-8158-04d673e8ee19\") " Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.757569 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.757984 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.763943 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g" (OuterVolumeSpecName: "kube-api-access-5w68g") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "kube-api-access-5w68g". 
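
Every entry in this capture has the same journald-plus-klog shape: a syslog timestamp, host, and unit[pid], then a klog header (severity letter, MMDD wall-clock time, pid, file:line) followed by a structured message of key="value" pairs. A minimal parser for that shape (an illustrative sketch; the field names are mine, not a kubelet API):

    import re

    # One pattern for the journald + klog line shape used throughout this capture.
    LINE = re.compile(
        r'^(?P<syslog_ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) '      # Nov 28 21:11:34
        r'(?P<host>\S+) (?P<unit>[\w-]+)\[(?P<unit_pid>\d+)\]: '
        r'(?P<sev>[IWE])(?P<klog_ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+) '
        r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
    )

    def parse(line):
        """Split one log line into named fields; None if the line doesn't match."""
        m = LINE.match(line)
        return m.groupdict() if m else None

    sample = ('Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.757984 4957 '
              'reconciler_common.go:293] "Volume detached for volume ..."')
    print(parse(sample)["src"])   # -> reconciler_common.go:293

Filtering on the "src" and "msg" fields is usually enough to pull one pod's story (for example, everything mentioning cinder-db-sync-lm4b6) out of the interleaved stream.
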
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.764662 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.766141 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts" (OuterVolumeSpecName: "scripts") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.826552 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.862270 4957 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.862320 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.862341 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.862363 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w68g\" (UniqueName: \"kubernetes.io/projected/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-kube-api-access-5w68g\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.950725 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data" (OuterVolumeSpecName: "config-data") pod "eb8d4ba5-28bb-41f2-8158-04d673e8ee19" (UID: "eb8d4ba5-28bb-41f2-8158-04d673e8ee19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.964659 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8d4ba5-28bb-41f2-8158-04d673e8ee19-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.973814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" event={"ID":"7e932e95-e1e7-4894-82fe-de55d3f79981","Type":"ContainerStarted","Data":"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c"} Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.975363 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.978773 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768b76c799-g58ls" event={"ID":"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc","Type":"ContainerStarted","Data":"bc2372c619090bf28d425edc82dae446a8716a88b87e88c0a828bb3e81bd5bfd"} Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.990194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lm4b6" event={"ID":"eb8d4ba5-28bb-41f2-8158-04d673e8ee19","Type":"ContainerDied","Data":"4908f83b174af36fcd2b1c1bcea5d570c39c0ac08dc6da1f829c0246d6d0b817"} Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.990255 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4908f83b174af36fcd2b1c1bcea5d570c39c0ac08dc6da1f829c0246d6d0b817" Nov 28 21:11:34 crc kubenswrapper[4957]: I1128 21:11:34.990759 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lm4b6" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.018574 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" event={"ID":"cba969ad-04d0-4a30-946d-995723ab4041","Type":"ContainerStarted","Data":"e9f721cb852d32fc746e0df7aa32022734eb4c2581b60619d3c7b5d9710f780c"} Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.021371 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" podStartSLOduration=5.021350825 podStartE2EDuration="5.021350825s" podCreationTimestamp="2025-11-28 21:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:35.019357975 +0000 UTC m=+1334.488005894" watchObservedRunningTime="2025-11-28 21:11:35.021350825 +0000 UTC m=+1334.489998744" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.119428 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:35 crc kubenswrapper[4957]: E1128 21:11:35.120049 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" containerName="cinder-db-sync" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.120066 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" containerName="cinder-db-sync" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.120403 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" containerName="cinder-db-sync" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.121922 4957 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.125460 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.127585 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.128271 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.131397 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m5nq9" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.168022 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.201702 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272123 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272283 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272355 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272410 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nql6\" (UniqueName: \"kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.272467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.300869 4957 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.302717 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.343955 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.381479 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.383506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.383599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.383712 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.383739 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nql6\" (UniqueName: \"kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.388568 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.389737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.392778 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.394910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.396845 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.396881 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.405419 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.409149 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nql6\" (UniqueName: \"kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.409404 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.410404 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.423412 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493148 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493581 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493636 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493899 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.493967 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqd64\" (UniqueName: \"kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494054 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494115 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494154 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494177 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494239 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494273 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.494313 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvnr\" (UniqueName: \"kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.514326 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-68cfc84946-tg8qn"] Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.519586 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.596536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.596590 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.596622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597495 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqd64\" (UniqueName: \"kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597427 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597545 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597672 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597808 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs\") pod \"cinder-api-0\" (UID: 
\"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597914 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.597971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598015 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598070 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvnr\" (UniqueName: \"kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598256 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598323 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598354 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.598393 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.599025 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config\") pod 
\"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.601468 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.605319 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.607804 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.611068 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.614284 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.615362 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqd64\" (UniqueName: \"kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64\") pod \"dnsmasq-dns-5c9776ccc5-7gvzp\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.615531 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvnr\" (UniqueName: \"kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr\") pod \"cinder-api-0\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " pod="openstack/cinder-api-0" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.684991 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:35 crc kubenswrapper[4957]: I1128 21:11:35.792814 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.040735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" event={"ID":"cba969ad-04d0-4a30-946d-995723ab4041","Type":"ContainerStarted","Data":"ba5a4c6cac5df5619e5fcfad54e51e6c503ce1867439d3e247679f872b422d6d"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.049648 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-768b76c799-g58ls" event={"ID":"d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc","Type":"ContainerStarted","Data":"77fa7208c23880af2fbd43a6d429d1c6bdf5bfe7d3007b07d3f25d1689e61319"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.056098 4957 generic.go:334] "Generic (PLEG): container finished" podID="f693abe9-5b02-4359-8522-bc89360df2b0" containerID="19dce4167d490c07875c84ad0d8387aa943adef0205cd020960713fd80a0aff6" exitCode=0 Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.056167 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerDied","Data":"19dce4167d490c07875c84ad0d8387aa943adef0205cd020960713fd80a0aff6"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.056190 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f693abe9-5b02-4359-8522-bc89360df2b0","Type":"ContainerDied","Data":"69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.056201 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e7d3511799e55f820d3d973069193a6c98776bdb233fcc51967f5b2b6f98a1" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.061602 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68cfc84946-tg8qn" event={"ID":"68ad8aac-7884-41b1-8f60-68a111f04c11","Type":"ContainerStarted","Data":"6cfee97afec2778b6f3d6fcad002654ab3fbf68a61c99aab43a177a6dfc21247"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.061662 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68cfc84946-tg8qn" event={"ID":"68ad8aac-7884-41b1-8f60-68a111f04c11","Type":"ContainerStarted","Data":"1abded267eb4c05fa30e71ae01e67e655aa99ef880a801c5b8df6244b0e16220"} Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.071400 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d9bff56f6-ghr8m" podStartSLOduration=3.274385424 podStartE2EDuration="6.07138137s" podCreationTimestamp="2025-11-28 21:11:30 +0000 UTC" firstStartedPulling="2025-11-28 21:11:31.853751346 +0000 UTC m=+1331.322399255" lastFinishedPulling="2025-11-28 21:11:34.650747292 +0000 UTC m=+1334.119395201" observedRunningTime="2025-11-28 21:11:36.063191619 +0000 UTC m=+1335.531839528" watchObservedRunningTime="2025-11-28 21:11:36.07138137 +0000 UTC m=+1335.540029279" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.103166 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-768b76c799-g58ls" podStartSLOduration=3.304157028 podStartE2EDuration="6.103149473s" podCreationTimestamp="2025-11-28 21:11:30 +0000 UTC" firstStartedPulling="2025-11-28 21:11:31.853788037 +0000 UTC m=+1331.322435946" lastFinishedPulling="2025-11-28 21:11:34.652780472 +0000 UTC m=+1334.121428391" observedRunningTime="2025-11-28 
21:11:36.102333943 +0000 UTC m=+1335.570981852" watchObservedRunningTime="2025-11-28 21:11:36.103149473 +0000 UTC m=+1335.571797382" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.130140 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.249042 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.338451 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.338526 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.338551 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mglkd\" (UniqueName: \"kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.338579 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.338674 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.341735 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.343466 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.368563 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts" (OuterVolumeSpecName: "scripts") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "scripts". 
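
The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check against the barbican-keystone-listener figures, using Decimal to keep the nanosecond digits exact (a sketch, not tracker code):

    from decimal import Decimal as D

    # Figures copied from the "Observed pod startup duration" entry for
    # openstack/barbican-keystone-listener-5d9bff56f6-ghr8m above.
    e2e  = D("6.07138137")    # watchObservedRunningTime (21:11:36.07138137) - creation (21:11:30)
    pull = D("1334.119395201") - D("1331.322399255")  # lastFinishedPulling - firstStartedPulling (m=+ offsets)

    assert pull == D("2.796995946")
    assert e2e - pull == D("3.274385424")   # matches podStartSLOduration in the entry
    print(f"pull window: {pull}s, pull-adjusted startup: {e2e - pull}s")

The entries with zero-valued pull timestamps (0001-01-01 00:00:00 +0000 UTC) fit the same relation: with no pull window recorded, podStartSLOduration equals podStartE2EDuration, as seen for barbican-api-8546f4f854-p9lnp and dnsmasq-dns-85ff748b95-87c6s.
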
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.407751 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd" (OuterVolumeSpecName: "kube-api-access-mglkd") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "kube-api-access-mglkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.416847 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.458991 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.459410 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data\") pod \"f693abe9-5b02-4359-8522-bc89360df2b0\" (UID: \"f693abe9-5b02-4359-8522-bc89360df2b0\") " Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.460539 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.460562 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.460573 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mglkd\" (UniqueName: \"kubernetes.io/projected/f693abe9-5b02-4359-8522-bc89360df2b0-kube-api-access-mglkd\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.460585 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f693abe9-5b02-4359-8522-bc89360df2b0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.493810 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.496201 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.516454 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "sg-core-conf-yaml". 
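
Read together, the reconciler and operation_generator entries walk each volume through a small lifecycle: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded on the way up; UnmountVolume started, UnmountVolume.TearDown succeeded, then Volume detached on the way down, which is exactly what the ceilometer-0 volumes do here. A sketch that folds parsed messages into a last-seen stage per volume (the prefixes are copied from the entries; the reducer and stage names are mine):

    import re

    # Message prefixes from this capture, listed in lifecycle order.
    TRANSITIONS = [
        ("operationExecutor.VerifyControllerAttachedVolume started", "attach-verified"),
        ("operationExecutor.MountVolume started",                    "mounting"),
        ("MountVolume.SetUp succeeded",                              "mounted"),
        ("operationExecutor.UnmountVolume started",                  "unmounting"),
        ("UnmountVolume.TearDown succeeded",                         "torn-down"),
        ("Volume detached",                                          "detached"),
    ]
    VOLUME = re.compile(r'volume \\?"(?P<name>[\w.-]+)\\?"')

    def record(state, msg):
        """Update the most recent lifecycle stage seen for the volume named in msg."""
        for prefix, stage in TRANSITIONS:
            if prefix in msg and (m := VOLUME.search(msg)):
                state[m.group("name")] = stage
                break
        return state

    state = {}
    record(state, 'operationExecutor.UnmountVolume started for volume \\"scripts\\" ...')
    record(state, 'Volume detached for volume \\"scripts\\" on node \\"crc\\"')
    print(state)   # {'scripts': 'detached'}

A volume that never reaches "detached" (or "mounted", on the way up) is the natural place to start when a pod is stuck terminating or stuck in ContainerCreating.
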
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.569929 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.569955 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.659329 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data" (OuterVolumeSpecName: "config-data") pod "f693abe9-5b02-4359-8522-bc89360df2b0" (UID: "f693abe9-5b02-4359-8522-bc89360df2b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:36 crc kubenswrapper[4957]: I1128 21:11:36.671934 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f693abe9-5b02-4359-8522-bc89360df2b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.083025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68cfc84946-tg8qn" event={"ID":"68ad8aac-7884-41b1-8f60-68a111f04c11","Type":"ContainerStarted","Data":"e8c53165d1b4789bc391307cb43e08dca7613a80cf9127d0a5b785600b8d6edb"} Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.083404 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.083659 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.087266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerStarted","Data":"e7cf32496696c85a52f5bf01075850ed48f8025f4e683928548777bcd6d7891b"} Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.089641 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerStarted","Data":"0c96bb17bfaff21f7f5b1a61892849115ab0a9b92cbef75f8bd84690bdef2666"} Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.092679 4957 generic.go:334] "Generic (PLEG): container finished" podID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerID="2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25" exitCode=0 Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.095313 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.095945 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" event={"ID":"93c900fb-97c4-4e17-ae10-873f8d8378f7","Type":"ContainerDied","Data":"2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25"} Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.095980 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" event={"ID":"93c900fb-97c4-4e17-ae10-873f8d8378f7","Type":"ContainerStarted","Data":"fe6f8c0a0d08b0599ce9ba3bdb0ae58695d9f8c626545970bddf1291e5e8405a"} Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.096191 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="dnsmasq-dns" containerID="cri-o://fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c" gracePeriod=10 Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.115664 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-68cfc84946-tg8qn" podStartSLOduration=3.115644604 podStartE2EDuration="3.115644604s" podCreationTimestamp="2025-11-28 21:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:37.103845274 +0000 UTC m=+1336.572493183" watchObservedRunningTime="2025-11-28 21:11:37.115644604 +0000 UTC m=+1336.584292513" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.209328 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.239154 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252184 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:11:37 crc kubenswrapper[4957]: E1128 21:11:37.252686 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-central-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252705 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-central-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: E1128 21:11:37.252730 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="proxy-httpd" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252737 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="proxy-httpd" Nov 28 21:11:37 crc kubenswrapper[4957]: E1128 21:11:37.252764 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="sg-core" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252770 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="sg-core" Nov 28 21:11:37 crc kubenswrapper[4957]: E1128 21:11:37.252787 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-notification-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252794 4957 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-notification-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.252998 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="sg-core" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.253011 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="proxy-httpd" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.253025 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-central-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.253045 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" containerName="ceilometer-notification-agent" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.255412 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.259144 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.259869 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.269545 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.397820 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.397870 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.397893 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.397945 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.397969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfpq\" (UniqueName: \"kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.398000 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.398057 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.499773 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500033 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500058 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500112 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500150 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfpq\" (UniqueName: \"kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500377 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.500665 4957 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.508634 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.511518 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.522582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.526374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.546024 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfpq\" (UniqueName: \"kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq\") pod \"ceilometer-0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.719908 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:11:37 crc kubenswrapper[4957]: I1128 21:11:37.864621 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.078728 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.173618 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerStarted","Data":"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8"} Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.181100 4957 generic.go:334] "Generic (PLEG): container finished" podID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerID="fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c" exitCode=0 Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.181177 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" event={"ID":"7e932e95-e1e7-4894-82fe-de55d3f79981","Type":"ContainerDied","Data":"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c"} Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.181227 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" event={"ID":"7e932e95-e1e7-4894-82fe-de55d3f79981","Type":"ContainerDied","Data":"9cda1ac5718feeeee74ebdb3409041b1771cbd69b1d0c3347482a6f8efb441eb"} Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.181251 4957 scope.go:117] "RemoveContainer" containerID="fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.181410 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87c6s" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.205646 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" event={"ID":"93c900fb-97c4-4e17-ae10-873f8d8378f7","Type":"ContainerStarted","Data":"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316"} Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.205705 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.224261 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8nl6\" (UniqueName: \"kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.224342 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.224399 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.224601 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: 
I1128 21:11:38.224678 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.224783 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb\") pod \"7e932e95-e1e7-4894-82fe-de55d3f79981\" (UID: \"7e932e95-e1e7-4894-82fe-de55d3f79981\") " Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.233998 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6" (OuterVolumeSpecName: "kube-api-access-p8nl6") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "kube-api-access-p8nl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.241592 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" podStartSLOduration=3.241028926 podStartE2EDuration="3.241028926s" podCreationTimestamp="2025-11-28 21:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:38.229762499 +0000 UTC m=+1337.698410418" watchObservedRunningTime="2025-11-28 21:11:38.241028926 +0000 UTC m=+1337.709676835" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.321704 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.332008 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8nl6\" (UniqueName: \"kubernetes.io/projected/7e932e95-e1e7-4894-82fe-de55d3f79981-kube-api-access-p8nl6\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.346479 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.350559 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.362892 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.365656 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config" (OuterVolumeSpecName: "config") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.374910 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e932e95-e1e7-4894-82fe-de55d3f79981" (UID: "7e932e95-e1e7-4894-82fe-de55d3f79981"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.434535 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.434569 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.434580 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.434589 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.434599 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e932e95-e1e7-4894-82fe-de55d3f79981-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.529752 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"] Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.545200 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87c6s"] Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.834994 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" path="/var/lib/kubelet/pods/7e932e95-e1e7-4894-82fe-de55d3f79981/volumes" Nov 28 21:11:38 crc kubenswrapper[4957]: I1128 21:11:38.835939 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f693abe9-5b02-4359-8522-bc89360df2b0" path="/var/lib/kubelet/pods/f693abe9-5b02-4359-8522-bc89360df2b0/volumes" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.024653 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.267711 4957 scope.go:117] "RemoveContainer" containerID="525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.353700 4957 scope.go:117] "RemoveContainer" 
containerID="fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c" Nov 28 21:11:39 crc kubenswrapper[4957]: E1128 21:11:39.353993 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c\": container with ID starting with fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c not found: ID does not exist" containerID="fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.354015 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c"} err="failed to get container status \"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c\": rpc error: code = NotFound desc = could not find container \"fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c\": container with ID starting with fa3e6ae46789e1f92ea010eb0b94c73fed46de4ef23a722a0204f9c4fa95652c not found: ID does not exist" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.354034 4957 scope.go:117] "RemoveContainer" containerID="525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf" Nov 28 21:11:39 crc kubenswrapper[4957]: E1128 21:11:39.354247 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf\": container with ID starting with 525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf not found: ID does not exist" containerID="525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf" Nov 28 21:11:39 crc kubenswrapper[4957]: I1128 21:11:39.354267 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf"} err="failed to get container status \"525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf\": rpc error: code = NotFound desc = could not find container \"525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf\": container with ID starting with 525da17a5cf628f7dbaad4a06740789d25af3068ed2b11ab328874751d78cbaf not found: ID does not exist" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.253729 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerStarted","Data":"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95"} Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.254323 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.253834 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api" containerID="cri-o://f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" gracePeriod=30 Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.253796 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api-log" containerID="cri-o://37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" gracePeriod=30 Nov 28 21:11:40 crc 
kubenswrapper[4957]: I1128 21:11:40.283233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerStarted","Data":"611056a1489b790eab2128b2a605128fa652d4d78f4feb31d03e12705c475766"} Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.283283 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerStarted","Data":"3fcf0bc871fe4cd8836233e1e2469340c986566e139daa76f1ed64c880e70a4e"} Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.293197 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.293176848 podStartE2EDuration="5.293176848s" podCreationTimestamp="2025-11-28 21:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:40.273017151 +0000 UTC m=+1339.741665060" watchObservedRunningTime="2025-11-28 21:11:40.293176848 +0000 UTC m=+1339.761824757" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.293180 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerStarted","Data":"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5"} Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.846379 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.887846 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.887921 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.887954 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.888016 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.888061 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvnr\" (UniqueName: \"kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.888182 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.888339 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data\") pod \"275ab7e7-e574-4b80-805a-327a3edf6fb3\" (UID: \"275ab7e7-e574-4b80-805a-327a3edf6fb3\") " Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.932956 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs" (OuterVolumeSpecName: "logs") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.933307 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.933392 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.934769 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr" (OuterVolumeSpecName: "kube-api-access-qqvnr") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "kube-api-access-qqvnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.938305 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts" (OuterVolumeSpecName: "scripts") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.961503 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.961951 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data" (OuterVolumeSpecName: "config-data") pod "275ab7e7-e574-4b80-805a-327a3edf6fb3" (UID: "275ab7e7-e574-4b80-805a-327a3edf6fb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993507 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/275ab7e7-e574-4b80-805a-327a3edf6fb3-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993868 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993887 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993906 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvnr\" (UniqueName: \"kubernetes.io/projected/275ab7e7-e574-4b80-805a-327a3edf6fb3-kube-api-access-qqvnr\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993920 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/275ab7e7-e574-4b80-805a-327a3edf6fb3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993932 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:40 crc kubenswrapper[4957]: I1128 21:11:40.993946 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275ab7e7-e574-4b80-805a-327a3edf6fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311501 4957 generic.go:334] "Generic (PLEG): container finished" podID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerID="f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" exitCode=0 Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311536 4957 generic.go:334] "Generic (PLEG): container finished" podID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerID="37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" exitCode=143 Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311565 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311634 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerDied","Data":"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95"} Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311668 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerDied","Data":"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8"} Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311678 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"275ab7e7-e574-4b80-805a-327a3edf6fb3","Type":"ContainerDied","Data":"0c96bb17bfaff21f7f5b1a61892849115ab0a9b92cbef75f8bd84690bdef2666"} Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.311694 4957 scope.go:117] "RemoveContainer" containerID="f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.316307 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerStarted","Data":"ec4907805aeb4c1bd543ebb32bf8826bc7268793da55f5c09744d2a7d028a825"} Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.318728 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerStarted","Data":"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2"} Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.341599 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.243149253 podStartE2EDuration="6.341580662s" podCreationTimestamp="2025-11-28 21:11:35 +0000 UTC" firstStartedPulling="2025-11-28 21:11:36.195307274 +0000 UTC m=+1335.663955183" lastFinishedPulling="2025-11-28 21:11:37.293738683 +0000 UTC m=+1336.762386592" observedRunningTime="2025-11-28 21:11:41.335464152 +0000 UTC m=+1340.804112061" watchObservedRunningTime="2025-11-28 21:11:41.341580662 +0000 UTC m=+1340.810228571" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.482025 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59d88444bf-br9dz" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.510983 4957 scope.go:117] "RemoveContainer" containerID="37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.565237 4957 scope.go:117] "RemoveContainer" containerID="f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.576171 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95\": container with ID starting with f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95 not found: ID does not exist" containerID="f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.576236 4957 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95"} err="failed to get container status \"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95\": rpc error: code = NotFound desc = could not find container \"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95\": container with ID starting with f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95 not found: ID does not exist" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.576268 4957 scope.go:117] "RemoveContainer" containerID="37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.579505 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8\": container with ID starting with 37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8 not found: ID does not exist" containerID="37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.579552 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8"} err="failed to get container status \"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8\": rpc error: code = NotFound desc = could not find container \"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8\": container with ID starting with 37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8 not found: ID does not exist" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.579582 4957 scope.go:117] "RemoveContainer" containerID="f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.579636 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.584546 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95"} err="failed to get container status \"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95\": rpc error: code = NotFound desc = could not find container \"f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95\": container with ID starting with f9469c8b30c091715b0f41c8f308599e5dcae9a4109c13646c78c56bb4971d95 not found: ID does not exist" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.584600 4957 scope.go:117] "RemoveContainer" containerID="37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.587098 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8"} err="failed to get container status \"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8\": rpc error: code = NotFound desc = could not find container \"37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8\": container with ID starting with 37e56f2e46a29c0a8d547cf50716211fb56fb1c9b6ff82a36dfccf3257e27ca8 not found: ID does not exist" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.602248 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 
21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.618191 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.618513 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76cf686b44-wd445" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-api" containerID="cri-o://649bfd69fec0adf02d30894586ad0679acec643886fbf82bdd84848c58486af0" gracePeriod=30 Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.621413 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76cf686b44-wd445" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-httpd" containerID="cri-o://c4f6936235945a71098c1bbfaab3a32faf3fedb9d6fa2a097b958360cffe85de" gracePeriod=30 Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.632920 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.633445 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.633461 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api" Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.633779 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api-log" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.633787 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api-log" Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.633805 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="init" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.633811 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="init" Nov 28 21:11:41 crc kubenswrapper[4957]: E1128 21:11:41.633820 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="dnsmasq-dns" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.633826 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="dnsmasq-dns" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.634069 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e932e95-e1e7-4894-82fe-de55d3f79981" containerName="dnsmasq-dns" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.634094 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api-log" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.634109 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" containerName="cinder-api" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.654203 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.664132 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.664675 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.665400 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.665418 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.710881 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9e0f02-f5be-405c-8318-834ef79136be-logs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.710943 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data-custom\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711003 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-scripts\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711082 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711101 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711140 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7b9e0f02-f5be-405c-8318-834ef79136be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.711192 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5hg\" (UniqueName: \"kubernetes.io/projected/7b9e0f02-f5be-405c-8318-834ef79136be-kube-api-access-cm5hg\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812498 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b9e0f02-f5be-405c-8318-834ef79136be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812570 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5hg\" (UniqueName: \"kubernetes.io/projected/7b9e0f02-f5be-405c-8318-834ef79136be-kube-api-access-cm5hg\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812635 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9e0f02-f5be-405c-8318-834ef79136be-logs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812669 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data-custom\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812721 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-scripts\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812768 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812804 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.812827 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.822602 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b9e0f02-f5be-405c-8318-834ef79136be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.823650 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-scripts\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.823679 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.826762 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.827463 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9e0f02-f5be-405c-8318-834ef79136be-logs\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.830184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.831459 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.840365 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9e0f02-f5be-405c-8318-834ef79136be-config-data-custom\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.844810 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5hg\" (UniqueName: \"kubernetes.io/projected/7b9e0f02-f5be-405c-8318-834ef79136be-kube-api-access-cm5hg\") pod \"cinder-api-0\" (UID: \"7b9e0f02-f5be-405c-8318-834ef79136be\") " pod="openstack/cinder-api-0" Nov 28 21:11:41 crc kubenswrapper[4957]: I1128 21:11:41.988911 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 21:11:42 crc kubenswrapper[4957]: I1128 21:11:42.346537 4957 generic.go:334] "Generic (PLEG): container finished" podID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerID="c4f6936235945a71098c1bbfaab3a32faf3fedb9d6fa2a097b958360cffe85de" exitCode=0 Nov 28 21:11:42 crc kubenswrapper[4957]: I1128 21:11:42.346599 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerDied","Data":"c4f6936235945a71098c1bbfaab3a32faf3fedb9d6fa2a097b958360cffe85de"} Nov 28 21:11:42 crc kubenswrapper[4957]: I1128 21:11:42.350267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerStarted","Data":"584d339a5c1d9b6a518229dbb0c6c7d45bc10fd58da7ffbf64fb0cef8c466009"} Nov 28 21:11:42 crc kubenswrapper[4957]: I1128 21:11:42.483570 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 21:11:42 crc kubenswrapper[4957]: W1128 21:11:42.486439 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9e0f02_f5be_405c_8318_834ef79136be.slice/crio-be9e0e30d844dca359f3f5d1af7729a2c38f6a11ce079e2d3587af32246547bf WatchSource:0}: Error finding container be9e0e30d844dca359f3f5d1af7729a2c38f6a11ce079e2d3587af32246547bf: Status 404 returned error can't find the container with id be9e0e30d844dca359f3f5d1af7729a2c38f6a11ce079e2d3587af32246547bf Nov 28 21:11:42 crc kubenswrapper[4957]: I1128 21:11:42.827561 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275ab7e7-e574-4b80-805a-327a3edf6fb3" path="/var/lib/kubelet/pods/275ab7e7-e574-4b80-805a-327a3edf6fb3/volumes" Nov 28 21:11:43 crc kubenswrapper[4957]: I1128 21:11:43.239764 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:43 crc kubenswrapper[4957]: I1128 21:11:43.242490 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:43 crc kubenswrapper[4957]: I1128 21:11:43.377579 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7b9e0f02-f5be-405c-8318-834ef79136be","Type":"ContainerStarted","Data":"cb96ac7fc7da98504153ed8a5defc59ac99f42f9b051ad7f7c64fd1d3da136d6"} Nov 28 21:11:43 crc kubenswrapper[4957]: I1128 21:11:43.378726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7b9e0f02-f5be-405c-8318-834ef79136be","Type":"ContainerStarted","Data":"be9e0e30d844dca359f3f5d1af7729a2c38f6a11ce079e2d3587af32246547bf"} Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.409635 4957 generic.go:334] "Generic (PLEG): container finished" podID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerID="649bfd69fec0adf02d30894586ad0679acec643886fbf82bdd84848c58486af0" exitCode=0 Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.409696 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerDied","Data":"649bfd69fec0adf02d30894586ad0679acec643886fbf82bdd84848c58486af0"} Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.414173 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7b9e0f02-f5be-405c-8318-834ef79136be","Type":"ContainerStarted","Data":"4a1fc65015a4c0fa7ed289282c9a7bbedba2d90759e1f1e0bc4fb2c84f67316f"} Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.414304 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.423580 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerStarted","Data":"fda487910037be59e3b0dc5854eddf183657d8eb3c5b21bf8974952d6e373a99"} Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.423827 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.445543 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.445526453 podStartE2EDuration="3.445526453s" podCreationTimestamp="2025-11-28 21:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:44.435982168 +0000 UTC m=+1343.904630117" watchObservedRunningTime="2025-11-28 21:11:44.445526453 +0000 UTC m=+1343.914174352" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.475735 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.276930177 podStartE2EDuration="7.475718317s" podCreationTimestamp="2025-11-28 21:11:37 +0000 UTC" firstStartedPulling="2025-11-28 21:11:39.292308463 +0000 UTC m=+1338.760956372" lastFinishedPulling="2025-11-28 21:11:43.491096603 +0000 UTC m=+1342.959744512" observedRunningTime="2025-11-28 21:11:44.460613175 +0000 UTC m=+1343.929261084" watchObservedRunningTime="2025-11-28 21:11:44.475718317 +0000 UTC m=+1343.944366226" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.549051 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.575243 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config\") pod \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.575560 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqqf\" (UniqueName: \"kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf\") pod \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.575947 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs\") pod \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.576272 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config\") pod \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.576418 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle\") pod \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\" (UID: \"172c2933-99bf-430b-bae8-2e66b1d8c0c0\") " Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.600353 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "172c2933-99bf-430b-bae8-2e66b1d8c0c0" (UID: "172c2933-99bf-430b-bae8-2e66b1d8c0c0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.601857 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf" (OuterVolumeSpecName: "kube-api-access-4vqqf") pod "172c2933-99bf-430b-bae8-2e66b1d8c0c0" (UID: "172c2933-99bf-430b-bae8-2e66b1d8c0c0"). InnerVolumeSpecName "kube-api-access-4vqqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.657656 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config" (OuterVolumeSpecName: "config") pod "172c2933-99bf-430b-bae8-2e66b1d8c0c0" (UID: "172c2933-99bf-430b-bae8-2e66b1d8c0c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.660796 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "172c2933-99bf-430b-bae8-2e66b1d8c0c0" (UID: "172c2933-99bf-430b-bae8-2e66b1d8c0c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.679429 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.679457 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.679468 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.679476 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqqf\" (UniqueName: \"kubernetes.io/projected/172c2933-99bf-430b-bae8-2e66b1d8c0c0-kube-api-access-4vqqf\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.690262 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "172c2933-99bf-430b-bae8-2e66b1d8c0c0" (UID: "172c2933-99bf-430b-bae8-2e66b1d8c0c0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:44 crc kubenswrapper[4957]: I1128 21:11:44.781867 4957 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/172c2933-99bf-430b-bae8-2e66b1d8c0c0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.438521 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cf686b44-wd445" event={"ID":"172c2933-99bf-430b-bae8-2e66b1d8c0c0","Type":"ContainerDied","Data":"79c31151bf2b1c9ea85ba97e3f03c65183393b0ec93c3de9d362060a50e2a898"} Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.438579 4957 scope.go:117] "RemoveContainer" containerID="c4f6936235945a71098c1bbfaab3a32faf3fedb9d6fa2a097b958360cffe85de" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.439592 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76cf686b44-wd445" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.476658 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.486093 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-76cf686b44-wd445"] Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.494464 4957 scope.go:117] "RemoveContainer" containerID="649bfd69fec0adf02d30894586ad0679acec643886fbf82bdd84848c58486af0" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.521156 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.687391 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.754129 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.756716 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="dnsmasq-dns" containerID="cri-o://09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8" gracePeriod=10 Nov 28 21:11:45 crc kubenswrapper[4957]: I1128 21:11:45.770924 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.226690 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.342886 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68cfc84946-tg8qn" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.416566 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"] Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.416794 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8546f4f854-p9lnp" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api-log" containerID="cri-o://fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9" gracePeriod=30 Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.417185 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8546f4f854-p9lnp" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api" containerID="cri-o://9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196" gracePeriod=30 Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.471250 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.475533 4957 generic.go:334] "Generic (PLEG): container finished" podID="fdc03480-9a62-4730-a35b-7335deece98a" containerID="09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8" exitCode=0 Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.478029 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" event={"ID":"fdc03480-9a62-4730-a35b-7335deece98a","Type":"ContainerDied","Data":"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8"} Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.478069 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" event={"ID":"fdc03480-9a62-4730-a35b-7335deece98a","Type":"ContainerDied","Data":"09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff"} Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.478089 4957 scope.go:117] "RemoveContainer" containerID="09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.527189 4957 scope.go:117] "RemoveContainer" containerID="8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.533956 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.534558 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.534813 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjr6\" (UniqueName: \"kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.534970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.535069 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.535169 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0\") pod \"fdc03480-9a62-4730-a35b-7335deece98a\" (UID: \"fdc03480-9a62-4730-a35b-7335deece98a\") " Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.558048 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6" (OuterVolumeSpecName: "kube-api-access-fbjr6") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "kube-api-access-fbjr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.570992 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.642932 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbjr6\" (UniqueName: \"kubernetes.io/projected/fdc03480-9a62-4730-a35b-7335deece98a-kube-api-access-fbjr6\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.650156 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.660862 4957 scope.go:117] "RemoveContainer" containerID="09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.664573 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: E1128 21:11:46.664714 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8\": container with ID starting with 09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8 not found: ID does not exist" containerID="09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.664746 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8"} err="failed to get container status \"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8\": rpc error: code = NotFound desc = could not find container \"09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8\": container with ID starting with 09a30a931550c41e6b04bbaa2e02a1b556ae0c98713eef7ca6d2102349dfd9f8 not found: ID does not exist" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.664769 4957 scope.go:117] "RemoveContainer" containerID="8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5" Nov 28 21:11:46 crc kubenswrapper[4957]: E1128 21:11:46.666935 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5\": container with ID starting with 8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5 not found: ID does not exist" containerID="8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5" Nov 28 21:11:46 crc kubenswrapper[4957]: 
I1128 21:11:46.666963 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5"} err="failed to get container status \"8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5\": rpc error: code = NotFound desc = could not find container \"8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5\": container with ID starting with 8b524fd8ff5b9c0101238417dc293a93176cf5ca27a2e5d937ab852ae2d07fa5 not found: ID does not exist" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.669425 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.688151 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.720192 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config" (OuterVolumeSpecName: "config") pod "fdc03480-9a62-4730-a35b-7335deece98a" (UID: "fdc03480-9a62-4730-a35b-7335deece98a"). InnerVolumeSpecName "config". 
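
The paired E-level records above — "ContainerStatus from runtime service failed" with code = NotFound, followed by "DeleteContainer returned error" — are a benign race rather than a failure: the kubelet asks the runtime about a container that CRI-O has already removed, so NotFound is the expected answer. The same pattern repeats below for the barbican-api and cinder-scheduler containers. A small filter to separate these from delete errors that actually need attention; the file name is a placeholder:

def classify_delete_errors(journal_path):
    """Split 'DeleteContainer returned error' records into benign NotFound
    races (container already gone) and everything else."""
    benign, suspicious = [], []
    with open(journal_path, encoding="utf-8") as fh:
        for line in fh:
            if '"DeleteContainer returned error"' not in line:
                continue
            # "code = NotFound" marks the already-deleted case seen above.
            (benign if "code = NotFound" in line else suspicious).append(line.rstrip())
    return benign, suspicious
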
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.745734 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.745768 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.745777 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.745787 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.745800 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdc03480-9a62-4730-a35b-7335deece98a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:46 crc kubenswrapper[4957]: I1128 21:11:46.885829 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" path="/var/lib/kubelet/pods/172c2933-99bf-430b-bae8-2e66b1d8c0c0/volumes" Nov 28 21:11:46 crc kubenswrapper[4957]: E1128 21:11:46.980131 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc03480_9a62_4730_a35b_7335deece98a.slice/crio-09d43eb2f552e394d1416e81781852a1745feb0ff8aaf3ccd5b05e076effceff\": RecentStats: unable to find data in memory cache]" Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.301693 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.489363 4957 generic.go:334] "Generic (PLEG): container finished" podID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerID="fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9" exitCode=143 Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.489516 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerDied","Data":"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9"} Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.492784 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t9fwz" Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.492951 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="cinder-scheduler" containerID="cri-o://206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5" gracePeriod=30 Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.493077 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="probe" containerID="cri-o://b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2" gracePeriod=30 Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.529780 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:47 crc kubenswrapper[4957]: I1128 21:11:47.542326 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t9fwz"] Nov 28 21:11:48 crc kubenswrapper[4957]: I1128 21:11:48.258680 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-956cd8448-hm6cs" Nov 28 21:11:48 crc kubenswrapper[4957]: I1128 21:11:48.518563 4957 generic.go:334] "Generic (PLEG): container finished" podID="0021599e-b049-4194-b0ab-13434e5bba97" containerID="b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2" exitCode=0 Nov 28 21:11:48 crc kubenswrapper[4957]: I1128 21:11:48.518611 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerDied","Data":"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2"} Nov 28 21:11:48 crc kubenswrapper[4957]: I1128 21:11:48.828531 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc03480-9a62-4730-a35b-7335deece98a" path="/var/lib/kubelet/pods/fdc03480-9a62-4730-a35b-7335deece98a/volumes" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.188542 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.247578 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom\") pod \"620c4ab0-7219-4cca-893e-e1e26bc9a927\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.247786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data\") pod \"620c4ab0-7219-4cca-893e-e1e26bc9a927\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.248149 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs\") pod \"620c4ab0-7219-4cca-893e-e1e26bc9a927\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.248222 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle\") pod \"620c4ab0-7219-4cca-893e-e1e26bc9a927\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.248275 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhlxl\" (UniqueName: \"kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl\") pod \"620c4ab0-7219-4cca-893e-e1e26bc9a927\" (UID: \"620c4ab0-7219-4cca-893e-e1e26bc9a927\") " Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.253560 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs" (OuterVolumeSpecName: "logs") pod "620c4ab0-7219-4cca-893e-e1e26bc9a927" (UID: "620c4ab0-7219-4cca-893e-e1e26bc9a927"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.262632 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "620c4ab0-7219-4cca-893e-e1e26bc9a927" (UID: "620c4ab0-7219-4cca-893e-e1e26bc9a927"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.266429 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl" (OuterVolumeSpecName: "kube-api-access-fhlxl") pod "620c4ab0-7219-4cca-893e-e1e26bc9a927" (UID: "620c4ab0-7219-4cca-893e-e1e26bc9a927"). InnerVolumeSpecName "kube-api-access-fhlxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.297430 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620c4ab0-7219-4cca-893e-e1e26bc9a927" (UID: "620c4ab0-7219-4cca-893e-e1e26bc9a927"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.345890 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data" (OuterVolumeSpecName: "config-data") pod "620c4ab0-7219-4cca-893e-e1e26bc9a927" (UID: "620c4ab0-7219-4cca-893e-e1e26bc9a927"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.352717 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620c4ab0-7219-4cca-893e-e1e26bc9a927-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.352751 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.352763 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhlxl\" (UniqueName: \"kubernetes.io/projected/620c4ab0-7219-4cca-893e-e1e26bc9a927-kube-api-access-fhlxl\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.352774 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.352783 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620c4ab0-7219-4cca-893e-e1e26bc9a927-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.555791 4957 generic.go:334] "Generic (PLEG): container finished" podID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerID="9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196" exitCode=0 Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.555836 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerDied","Data":"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196"} Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.555851 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8546f4f854-p9lnp" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.555870 4957 scope.go:117] "RemoveContainer" containerID="9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.555860 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8546f4f854-p9lnp" event={"ID":"620c4ab0-7219-4cca-893e-e1e26bc9a927","Type":"ContainerDied","Data":"ab159917fa8dbdd70fdbef68dd9015f1518215c93f0dbe5929909ac6455301ef"} Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.588302 4957 scope.go:117] "RemoveContainer" containerID="fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.596326 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"] Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.603959 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8546f4f854-p9lnp"] Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.606834 4957 scope.go:117] "RemoveContainer" containerID="9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196" Nov 28 21:11:50 crc kubenswrapper[4957]: E1128 21:11:50.607276 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196\": container with ID starting with 9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196 not found: ID does not exist" containerID="9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.607321 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196"} err="failed to get container status \"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196\": rpc error: code = NotFound desc = could not find container \"9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196\": container with ID starting with 9f31c4906ad05be05ff34a6f51ff2bfc4a6dfcc2ea2e103bc13fc6903e702196 not found: ID does not exist" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.607343 4957 scope.go:117] "RemoveContainer" containerID="fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9" Nov 28 21:11:50 crc kubenswrapper[4957]: E1128 21:11:50.607727 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9\": container with ID starting with fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9 not found: ID does not exist" containerID="fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9" Nov 28 21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.607777 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9"} err="failed to get container status \"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9\": rpc error: code = NotFound desc = could not find container \"fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9\": container with ID starting with fa627789836851a44a864f570282ae1e6c0251a5d14030075105ca45fa83cba9 not found: ID does not exist" Nov 28 
21:11:50 crc kubenswrapper[4957]: I1128 21:11:50.826198 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" path="/var/lib/kubelet/pods/620c4ab0-7219-4cca-893e-e1e26bc9a927/volumes" Nov 28 21:11:51 crc kubenswrapper[4957]: I1128 21:11:51.974806 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095037 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095161 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095263 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095340 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.095528 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nql6\" (UniqueName: \"kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6\") pod \"0021599e-b049-4194-b0ab-13434e5bba97\" (UID: \"0021599e-b049-4194-b0ab-13434e5bba97\") " Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.096742 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.104518 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6" (OuterVolumeSpecName: "kube-api-access-6nql6") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "kube-api-access-6nql6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.120664 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts" (OuterVolumeSpecName: "scripts") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.121380 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.165977 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.197910 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nql6\" (UniqueName: \"kubernetes.io/projected/0021599e-b049-4194-b0ab-13434e5bba97-kube-api-access-6nql6\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.197940 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.197950 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.197962 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.197970 4957 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0021599e-b049-4194-b0ab-13434e5bba97-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.224361 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data" (OuterVolumeSpecName: "config-data") pod "0021599e-b049-4194-b0ab-13434e5bba97" (UID: "0021599e-b049-4194-b0ab-13434e5bba97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.299569 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0021599e-b049-4194-b0ab-13434e5bba97-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.579026 4957 generic.go:334] "Generic (PLEG): container finished" podID="0021599e-b049-4194-b0ab-13434e5bba97" containerID="206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5" exitCode=0 Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.579072 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerDied","Data":"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5"} Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.579083 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.579100 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0021599e-b049-4194-b0ab-13434e5bba97","Type":"ContainerDied","Data":"e7cf32496696c85a52f5bf01075850ed48f8025f4e683928548777bcd6d7891b"} Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.579117 4957 scope.go:117] "RemoveContainer" containerID="b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.606466 4957 scope.go:117] "RemoveContainer" containerID="206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.629626 4957 scope.go:117] "RemoveContainer" containerID="b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.630479 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2\": container with ID starting with b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2 not found: ID does not exist" containerID="b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.630538 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2"} err="failed to get container status \"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2\": rpc error: code = NotFound desc = could not find container \"b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2\": container with ID starting with b4554134fbf3e8a126cc302ad011bae72c6eeb40a57193a4187957c0073eacb2 not found: ID does not exist" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.630573 4957 scope.go:117] "RemoveContainer" containerID="206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.631111 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5\": container with ID starting with 206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5 not found: ID does not exist" 
containerID="206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.631145 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5"} err="failed to get container status \"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5\": rpc error: code = NotFound desc = could not find container \"206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5\": container with ID starting with 206dfcdf1d19d06730bf721a6db16558518a6f2cfbd17428547378c56d1948e5 not found: ID does not exist" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.631333 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.653139 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669200 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669745 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="dnsmasq-dns" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669767 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="dnsmasq-dns" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669783 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="probe" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669790 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="probe" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669812 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="cinder-scheduler" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669819 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="cinder-scheduler" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669833 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-api" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669839 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-api" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669846 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-httpd" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669851 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-httpd" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669869 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="init" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669876 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="init" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669886 4957 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669892 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api" Nov 28 21:11:52 crc kubenswrapper[4957]: E1128 21:11:52.669902 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api-log" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.669908 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api-log" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670117 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670132 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-httpd" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670142 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="172c2933-99bf-430b-bae8-2e66b1d8c0c0" containerName="neutron-api" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670153 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="probe" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670163 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="620c4ab0-7219-4cca-893e-e1e26bc9a927" containerName="barbican-api-log" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670174 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="0021599e-b049-4194-b0ab-13434e5bba97" containerName="cinder-scheduler" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.670193 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc03480-9a62-4730-a35b-7335deece98a" containerName="dnsmasq-dns" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.671463 4957 util.go:30] "No sandbox for pod can be found. 
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.671463 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.677062 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.700581 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813148 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813239 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-scripts\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813290 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813364 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813387 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.813403 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6nrh\" (UniqueName: \"kubernetes.io/projected/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-kube-api-access-s6nrh\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.823461 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0021599e-b049-4194-b0ab-13434e5bba97" path="/var/lib/kubelet/pods/0021599e-b049-4194-b0ab-13434e5bba97/volumes"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.915491 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0"
Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.915855 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.915884 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6nrh\" (UniqueName: \"kubernetes.io/projected/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-kube-api-access-s6nrh\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.916013 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.916073 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-scripts\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.915740 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.917677 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.921585 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-scripts\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.921655 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.922007 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.922583 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:52 crc kubenswrapper[4957]: I1128 21:11:52.933662 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s6nrh\" (UniqueName: \"kubernetes.io/projected/b490ddc2-ebb5-4cba-abea-a76c7e7a5172-kube-api-access-s6nrh\") pod \"cinder-scheduler-0\" (UID: \"b490ddc2-ebb5-4cba-abea-a76c7e7a5172\") " pod="openstack/cinder-scheduler-0" Nov 28 21:11:53 crc kubenswrapper[4957]: I1128 21:11:53.003344 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 21:11:53 crc kubenswrapper[4957]: W1128 21:11:53.491119 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb490ddc2_ebb5_4cba_abea_a76c7e7a5172.slice/crio-542b7684a82e8d1ae00d4028757206965bff828ab77e2d183f930eff2bc28cf7 WatchSource:0}: Error finding container 542b7684a82e8d1ae00d4028757206965bff828ab77e2d183f930eff2bc28cf7: Status 404 returned error can't find the container with id 542b7684a82e8d1ae00d4028757206965bff828ab77e2d183f930eff2bc28cf7 Nov 28 21:11:53 crc kubenswrapper[4957]: I1128 21:11:53.492759 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 21:11:53 crc kubenswrapper[4957]: I1128 21:11:53.594337 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b490ddc2-ebb5-4cba-abea-a76c7e7a5172","Type":"ContainerStarted","Data":"542b7684a82e8d1ae00d4028757206965bff828ab77e2d183f930eff2bc28cf7"} Nov 28 21:11:53 crc kubenswrapper[4957]: I1128 21:11:53.801039 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 21:11:54 crc kubenswrapper[4957]: I1128 21:11:54.610722 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b490ddc2-ebb5-4cba-abea-a76c7e7a5172","Type":"ContainerStarted","Data":"b89b9101139d2ffc765c305f196ab4deb865fe9355eed14c044a5753da102b67"} Nov 28 21:11:55 crc kubenswrapper[4957]: I1128 21:11:55.236141 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7859c96b89-s4dx8" Nov 28 21:11:55 crc kubenswrapper[4957]: I1128 21:11:55.623365 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b490ddc2-ebb5-4cba-abea-a76c7e7a5172","Type":"ContainerStarted","Data":"ff00621cffed8c87bc408efe9e33fea84cc2a5e2b400e5205a766765f6139eb9"} Nov 28 21:11:55 crc kubenswrapper[4957]: I1128 21:11:55.654828 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.654809529 podStartE2EDuration="3.654809529s" podCreationTimestamp="2025-11-28 21:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:11:55.650678097 +0000 UTC m=+1355.119326006" watchObservedRunningTime="2025-11-28 21:11:55.654809529 +0000 UTC m=+1355.123457428" Nov 28 21:11:58 crc kubenswrapper[4957]: I1128 21:11:58.003674 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.029383 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.031319 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.033536 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.034198 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.034377 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9k9sb" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.044054 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.154942 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.155032 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5vbs\" (UniqueName: \"kubernetes.io/projected/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-kube-api-access-d5vbs\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.155065 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.155311 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.256737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.256798 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.256846 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5vbs\" (UniqueName: \"kubernetes.io/projected/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-kube-api-access-d5vbs\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.256867 4957 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.257682 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.264223 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.266579 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.278899 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5vbs\" (UniqueName: \"kubernetes.io/projected/9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687-kube-api-access-d5vbs\") pod \"openstackclient\" (UID: \"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687\") " pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.370706 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 21:11:59 crc kubenswrapper[4957]: I1128 21:11:59.840576 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 21:11:59 crc kubenswrapper[4957]: W1128 21:11:59.842497 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ccbd9c5_6f33_4f79_810e_5e9f6d2bc687.slice/crio-d387079cdb7347cb00a11b98ddbbcaa39b0e228475e50d033fb7d6442792e628 WatchSource:0}: Error finding container d387079cdb7347cb00a11b98ddbbcaa39b0e228475e50d033fb7d6442792e628: Status 404 returned error can't find the container with id d387079cdb7347cb00a11b98ddbbcaa39b0e228475e50d033fb7d6442792e628 Nov 28 21:12:00 crc kubenswrapper[4957]: I1128 21:12:00.692586 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687","Type":"ContainerStarted","Data":"d387079cdb7347cb00a11b98ddbbcaa39b0e228475e50d033fb7d6442792e628"} Nov 28 21:12:00 crc kubenswrapper[4957]: I1128 21:12:00.906657 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:00 crc kubenswrapper[4957]: I1128 21:12:00.906901 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-log" containerID="cri-o://e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4" gracePeriod=30 Nov 28 21:12:00 crc kubenswrapper[4957]: I1128 21:12:00.906989 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-httpd" containerID="cri-o://d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034" gracePeriod=30 Nov 28 21:12:01 crc kubenswrapper[4957]: I1128 21:12:01.705490 4957 generic.go:334] "Generic (PLEG): container finished" podID="33e364e1-d026-4648-b15c-8131dc797463" containerID="e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4" exitCode=143 Nov 28 21:12:01 crc kubenswrapper[4957]: I1128 21:12:01.705535 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerDied","Data":"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4"} Nov 28 21:12:02 crc kubenswrapper[4957]: I1128 21:12:02.536322 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:02 crc kubenswrapper[4957]: I1128 21:12:02.536849 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-log" containerID="cri-o://31eeb8fa31a4eb4b006669c9c18aaa04124e0678df835b7e79586888328bb713" gracePeriod=30 Nov 28 21:12:02 crc kubenswrapper[4957]: I1128 21:12:02.537024 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-httpd" containerID="cri-o://5933658893a12aed819e1592d5775a5c22c4e2fe4131c71bf20c0e29d5068eae" gracePeriod=30 Nov 28 21:12:02 crc kubenswrapper[4957]: I1128 21:12:02.726271 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerID="31eeb8fa31a4eb4b006669c9c18aaa04124e0678df835b7e79586888328bb713" exitCode=143 Nov 28 21:12:02 crc kubenswrapper[4957]: I1128 21:12:02.726313 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerDied","Data":"31eeb8fa31a4eb4b006669c9c18aaa04124e0678df835b7e79586888328bb713"} Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.260237 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.444387 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6598dd477f-t4jws"] Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.447049 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.448531 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.449317 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.449587 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.460482 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6598dd477f-t4jws"] Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.563809 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xf5\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-kube-api-access-f5xf5\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.563948 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-etc-swift\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.563975 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-public-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.564014 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-config-data\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.564045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-run-httpd\") pod 
\"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.564102 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-combined-ca-bundle\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.575707 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-log-httpd\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.575969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-internal-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.677581 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xf5\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-kube-api-access-f5xf5\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.677984 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-etc-swift\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678088 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-public-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678170 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-config-data\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678328 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-run-httpd\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678474 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-combined-ca-bundle\") pod 
\"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678600 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-log-httpd\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.678813 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-internal-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.679808 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-log-httpd\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.680374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3417d80-e650-4833-b935-a4cbacf23212-run-httpd\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.684903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-combined-ca-bundle\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.691191 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-config-data\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.691414 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-public-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.696263 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3417d80-e650-4833-b935-a4cbacf23212-internal-tls-certs\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.698169 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-etc-swift\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 
21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.699033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5xf5\" (UniqueName: \"kubernetes.io/projected/d3417d80-e650-4833-b935-a4cbacf23212-kube-api-access-f5xf5\") pod \"swift-proxy-6598dd477f-t4jws\" (UID: \"d3417d80-e650-4833-b935-a4cbacf23212\") " pod="openstack/swift-proxy-6598dd477f-t4jws"
Nov 28 21:12:03 crc kubenswrapper[4957]: I1128 21:12:03.787790 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6598dd477f-t4jws"
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.511534 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6598dd477f-t4jws"]
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.605127 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.605747 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-central-agent" containerID="cri-o://611056a1489b790eab2128b2a605128fa652d4d78f4feb31d03e12705c475766" gracePeriod=30
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.605778 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" containerID="cri-o://fda487910037be59e3b0dc5854eddf183657d8eb3c5b21bf8974952d6e373a99" gracePeriod=30
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.605858 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="sg-core" containerID="cri-o://584d339a5c1d9b6a518229dbb0c6c7d45bc10fd58da7ffbf64fb0cef8c466009" gracePeriod=30
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.605903 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-notification-agent" containerID="cri-o://ec4907805aeb4c1bd543ebb32bf8826bc7268793da55f5c09744d2a7d028a825" gracePeriod=30
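
The four "Killing container with a grace period" entries are the kubelet asking CRI-O to stop each ceilometer-0 container, giving it gracePeriod=30 seconds to exit after SIGTERM before the runtime escalates to SIGKILL. That convention explains the exitCode values in the ContainerDied events nearby: 143 is 128 + 15 (terminated by SIGTERM, as with glance-log earlier), while 0 and 2 are ordinary process exit statuses, and the readiness-probe failures against 10.217.0.199:3000 ("connection reset by peer", later "connection refused") are the expected side effect of proxy-httpd going away rather than a new fault. A small decoder, as a sketch (the 128+N encoding is the shell/runtime convention, not something these logs state):

    import signal

    def explain_exit_code(code: int) -> str:
        # Runtimes report 128 + N when a process is terminated by signal N.
        if code > 128:
            return 'killed by ' + signal.Signals(code - 128).name
        return 'exited with status %d' % code

    print(explain_exit_code(143))  # killed by SIGTERM (glance-log)
    print(explain_exit_code(2))    # exited with status 2 (sg-core)
    print(explain_exit_code(0))    # exited with status 0 (glance-httpd)
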
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.676338 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.733228 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": read tcp 10.217.0.2:49250->10.217.0.199:3000: read: connection reset by peer"
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.754368 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6598dd477f-t4jws" event={"ID":"d3417d80-e650-4833-b935-a4cbacf23212","Type":"ContainerStarted","Data":"9c2546971f7fea333eebd47b01d5df4b2735da7ba31c49d7f6d1afb005739218"}
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.773146 4957 generic.go:334] "Generic (PLEG): container finished" podID="621a0725-4c27-47d3-be24-89a00de305b0" containerID="584d339a5c1d9b6a518229dbb0c6c7d45bc10fd58da7ffbf64fb0cef8c466009" exitCode=2
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.773220 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerDied","Data":"584d339a5c1d9b6a518229dbb0c6c7d45bc10fd58da7ffbf64fb0cef8c466009"}
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.801519 4957 generic.go:334] "Generic (PLEG): container finished" podID="33e364e1-d026-4648-b15c-8131dc797463" containerID="d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034" exitCode=0
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.801567 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerDied","Data":"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034"}
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.801596 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33e364e1-d026-4648-b15c-8131dc797463","Type":"ContainerDied","Data":"fae3f56e8c5315cbb94559d2793f822e84e023ce6e98010790c705d7dc808867"}
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.801615 4957 scope.go:117] "RemoveContainer" containerID="d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034"
Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.801782 4957 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814403 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v69m5\" (UniqueName: \"kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814529 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814555 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814591 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814663 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814786 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814809 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.814907 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts\") pod \"33e364e1-d026-4648-b15c-8131dc797463\" (UID: \"33e364e1-d026-4648-b15c-8131dc797463\") " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.821506 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.824296 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts" (OuterVolumeSpecName: "scripts") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.824932 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs" (OuterVolumeSpecName: "logs") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.826832 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.841851 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5" (OuterVolumeSpecName: "kube-api-access-v69m5") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "kube-api-access-v69m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.910278 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data" (OuterVolumeSpecName: "config-data") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917575 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917609 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917618 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v69m5\" (UniqueName: \"kubernetes.io/projected/33e364e1-d026-4648-b15c-8131dc797463-kube-api-access-v69m5\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917629 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917648 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.917656 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e364e1-d026-4648-b15c-8131dc797463-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.927803 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.945183 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33e364e1-d026-4648-b15c-8131dc797463" (UID: "33e364e1-d026-4648-b15c-8131dc797463"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:04 crc kubenswrapper[4957]: I1128 21:12:04.959541 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.020291 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.020322 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.020332 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e364e1-d026-4648-b15c-8131dc797463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.027913 4957 scope.go:117] "RemoveContainer" containerID="e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.061091 4957 scope.go:117] "RemoveContainer" containerID="d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034" Nov 28 21:12:05 crc kubenswrapper[4957]: E1128 21:12:05.061759 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034\": container with ID starting with d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034 not found: ID does not exist" containerID="d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.061817 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034"} err="failed to get container status \"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034\": rpc error: code = NotFound desc = could not find container \"d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034\": container with ID starting with d3c1b1935aa29f312b7abff4187f6f2e4d1ac4e22bb406ce66fca2e195363034 not found: ID does not exist" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.061843 4957 scope.go:117] "RemoveContainer" containerID="e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4" Nov 28 21:12:05 crc kubenswrapper[4957]: E1128 21:12:05.064041 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4\": container with ID starting with e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4 not found: ID does not exist" containerID="e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.064079 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4"} err="failed to get container status \"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4\": rpc error: code = NotFound desc = could not find container 
\"e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4\": container with ID starting with e9cc86281cc88dbb351a990d70e84a728f1641851a9d74e5cc12500edde622f4 not found: ID does not exist" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.162935 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.182279 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.192126 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:05 crc kubenswrapper[4957]: E1128 21:12:05.192522 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-httpd" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.192539 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-httpd" Nov 28 21:12:05 crc kubenswrapper[4957]: E1128 21:12:05.192591 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-log" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.192599 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-log" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.193009 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-httpd" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.193039 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e364e1-d026-4648-b15c-8131dc797463" containerName="glance-log" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.194191 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.198479 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.198494 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.202659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327401 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327445 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327550 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8h4x\" (UniqueName: \"kubernetes.io/projected/29dab28a-afdf-4c02-a83a-f43c408b24ee-kube-api-access-b8h4x\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327577 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-logs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327625 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327644 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.327703 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.429865 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8h4x\" (UniqueName: \"kubernetes.io/projected/29dab28a-afdf-4c02-a83a-f43c408b24ee-kube-api-access-b8h4x\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.429925 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-logs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.429996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430024 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430056 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430114 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430164 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430386 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430570 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.430557 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29dab28a-afdf-4c02-a83a-f43c408b24ee-logs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.436366 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.438004 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-scripts\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.444001 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.450267 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29dab28a-afdf-4c02-a83a-f43c408b24ee-config-data\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.464181 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8h4x\" (UniqueName: \"kubernetes.io/projected/29dab28a-afdf-4c02-a83a-f43c408b24ee-kube-api-access-b8h4x\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.537723 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29dab28a-afdf-4c02-a83a-f43c408b24ee\") " pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.815691 4957 generic.go:334] "Generic (PLEG): container finished" podID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerID="5933658893a12aed819e1592d5775a5c22c4e2fe4131c71bf20c0e29d5068eae" exitCode=0 Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.815746 4957 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerDied","Data":"5933658893a12aed819e1592d5775a5c22c4e2fe4131c71bf20c0e29d5068eae"} Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.817744 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6598dd477f-t4jws" event={"ID":"d3417d80-e650-4833-b935-a4cbacf23212","Type":"ContainerStarted","Data":"d4b2d8fdcc72062243ffad29d89c1121e08bf92f0afa5bd9dac4c0a1d411f588"} Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.817770 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6598dd477f-t4jws" event={"ID":"d3417d80-e650-4833-b935-a4cbacf23212","Type":"ContainerStarted","Data":"d3dc16a2a0dd1791799b8cbb6e6766c25f4ca8bc22a5258e9df4ff85274b7190"} Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.819504 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.819858 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.832067 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.844446 4957 generic.go:334] "Generic (PLEG): container finished" podID="621a0725-4c27-47d3-be24-89a00de305b0" containerID="fda487910037be59e3b0dc5854eddf183657d8eb3c5b21bf8974952d6e373a99" exitCode=0 Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.844476 4957 generic.go:334] "Generic (PLEG): container finished" podID="621a0725-4c27-47d3-be24-89a00de305b0" containerID="611056a1489b790eab2128b2a605128fa652d4d78f4feb31d03e12705c475766" exitCode=0 Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.844498 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerDied","Data":"fda487910037be59e3b0dc5854eddf183657d8eb3c5b21bf8974952d6e373a99"} Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.844521 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerDied","Data":"611056a1489b790eab2128b2a605128fa652d4d78f4feb31d03e12705c475766"} Nov 28 21:12:05 crc kubenswrapper[4957]: I1128 21:12:05.855906 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6598dd477f-t4jws" podStartSLOduration=2.855886931 podStartE2EDuration="2.855886931s" podCreationTimestamp="2025-11-28 21:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:05.8391691 +0000 UTC m=+1365.307817009" watchObservedRunningTime="2025-11-28 21:12:05.855886931 +0000 UTC m=+1365.324534840" Nov 28 21:12:06 crc kubenswrapper[4957]: I1128 21:12:06.827157 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e364e1-d026-4648-b15c-8131dc797463" path="/var/lib/kubelet/pods/33e364e1-d026-4648-b15c-8131dc797463/volumes" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.499866 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tfscq"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.501401 4957 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.522828 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tfscq"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.597804 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q22tr\" (UniqueName: \"kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.597905 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.701314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q22tr\" (UniqueName: \"kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.701437 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.704004 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.718254 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-28e1-account-create-update-r5nnk"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.723115 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": dial tcp 10.217.0.199:3000: connect: connection refused" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.723842 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.729248 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q22tr\" (UniqueName: \"kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr\") pod \"nova-api-db-create-tfscq\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.733461 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.770245 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-28e1-account-create-update-r5nnk"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.800908 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-sz4nq"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.804998 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.805064 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xcb\" (UniqueName: \"kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.805538 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.812603 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sz4nq"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.824365 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.905219 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m5vbz"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.906631 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.906730 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.906809 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcrs\" (UniqueName: \"kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.906961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.907027 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xcb\" (UniqueName: \"kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.908914 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.927580 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9e02-account-create-update-b59kd"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.929393 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.931166 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xcb\" (UniqueName: \"kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb\") pod \"nova-api-28e1-account-create-update-r5nnk\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.931325 4957 generic.go:334] "Generic (PLEG): container finished" podID="621a0725-4c27-47d3-be24-89a00de305b0" containerID="ec4907805aeb4c1bd543ebb32bf8826bc7268793da55f5c09744d2a7d028a825" exitCode=0 Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.931359 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerDied","Data":"ec4907805aeb4c1bd543ebb32bf8826bc7268793da55f5c09744d2a7d028a825"} Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.931761 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.952736 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m5vbz"] Nov 28 21:12:07 crc kubenswrapper[4957]: I1128 21:12:07.981428 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9e02-account-create-update-b59kd"] Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skws\" (UniqueName: \"kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010658 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts\") pod \"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010688 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010777 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75tkj\" (UniqueName: \"kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj\") pod 
\"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.010810 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcrs\" (UniqueName: \"kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.012025 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.040851 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcrs\" (UniqueName: \"kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs\") pod \"nova-cell0-db-create-sz4nq\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.100715 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5199-account-create-update-clxd9"] Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.102618 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.107654 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.114706 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5199-account-create-update-clxd9"] Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.129591 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2skws\" (UniqueName: \"kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.129787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts\") pod \"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.129851 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.129979 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75tkj\" (UniqueName: \"kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj\") pod 
\"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.130849 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.142124 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts\") pod \"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.154671 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2skws\" (UniqueName: \"kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws\") pod \"nova-cell1-db-create-m5vbz\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.166646 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75tkj\" (UniqueName: \"kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj\") pod \"nova-cell0-9e02-account-create-update-b59kd\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.190903 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.206118 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.232048 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.237791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.238026 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whgt5\" (UniqueName: \"kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.310006 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.340195 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whgt5\" (UniqueName: \"kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.340339 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.340929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.356016 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whgt5\" (UniqueName: \"kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5\") pod \"nova-cell1-5199-account-create-update-clxd9\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:08 crc kubenswrapper[4957]: I1128 21:12:08.429584 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.739065 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.853958 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854033 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854062 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854166 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854200 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854320 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.854359 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfpq\" (UniqueName: \"kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq\") pod \"621a0725-4c27-47d3-be24-89a00de305b0\" (UID: \"621a0725-4c27-47d3-be24-89a00de305b0\") " Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.855098 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.855189 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.862694 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq" (OuterVolumeSpecName: "kube-api-access-6dfpq") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "kube-api-access-6dfpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.862866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts" (OuterVolumeSpecName: "scripts") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.921345 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.957005 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.957046 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfpq\" (UniqueName: \"kubernetes.io/projected/621a0725-4c27-47d3-be24-89a00de305b0-kube-api-access-6dfpq\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.957063 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.957077 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.957092 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/621a0725-4c27-47d3-be24-89a00de305b0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:12 crc kubenswrapper[4957]: I1128 21:12:12.971425 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.006528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687","Type":"ContainerStarted","Data":"d5784937f692c9b48751ab4dc7cb37b1996f02056d29b3d02a6582ba46f9b78e"} Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.012299 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"621a0725-4c27-47d3-be24-89a00de305b0","Type":"ContainerDied","Data":"3fcf0bc871fe4cd8836233e1e2469340c986566e139daa76f1ed64c880e70a4e"} Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.012339 4957 scope.go:117] "RemoveContainer" containerID="fda487910037be59e3b0dc5854eddf183657d8eb3c5b21bf8974952d6e373a99" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.012509 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.017720 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data" (OuterVolumeSpecName: "config-data") pod "621a0725-4c27-47d3-be24-89a00de305b0" (UID: "621a0725-4c27-47d3-be24-89a00de305b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.025031 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.589090234 podStartE2EDuration="15.02501425s" podCreationTimestamp="2025-11-28 21:11:58 +0000 UTC" firstStartedPulling="2025-11-28 21:11:59.845025418 +0000 UTC m=+1359.313673327" lastFinishedPulling="2025-11-28 21:12:12.280949434 +0000 UTC m=+1371.749597343" observedRunningTime="2025-11-28 21:12:13.024817276 +0000 UTC m=+1372.493465185" watchObservedRunningTime="2025-11-28 21:12:13.02501425 +0000 UTC m=+1372.493662159" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.059853 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.059895 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621a0725-4c27-47d3-be24-89a00de305b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.203159 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.227310 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tfscq"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.229032 4957 scope.go:117] "RemoveContainer" containerID="584d339a5c1d9b6a518229dbb0c6c7d45bc10fd58da7ffbf64fb0cef8c466009" Nov 28 21:12:13 crc kubenswrapper[4957]: W1128 21:12:13.236766 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9ea8814_97e9_483f_b18b_152cf55db66e.slice/crio-1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc WatchSource:0}: Error finding container 1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc: Status 404 returned error can't find the container with id 1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266087 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266527 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266558 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266581 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266648 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42fml\" (UniqueName: \"kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266705 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266797 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.266816 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\" (UID: \"f066d22c-10b0-4ae4-8e14-0e99502ff8d6\") " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.272463 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.272870 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs" (OuterVolumeSpecName: "logs") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.274426 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml" (OuterVolumeSpecName: "kube-api-access-42fml") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "kube-api-access-42fml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.287271 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.302730 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts" (OuterVolumeSpecName: "scripts") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.373679 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.373709 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42fml\" (UniqueName: \"kubernetes.io/projected/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-kube-api-access-42fml\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.373719 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.373728 4957 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.373749 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.377419 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.412316 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data" (OuterVolumeSpecName: "config-data") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.431362 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.439856 4957 scope.go:117] "RemoveContainer" containerID="ec4907805aeb4c1bd543ebb32bf8826bc7268793da55f5c09744d2a7d028a825" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.460557 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f066d22c-10b0-4ae4-8e14-0e99502ff8d6" (UID: "f066d22c-10b0-4ae4-8e14-0e99502ff8d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.477891 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.477925 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.477945 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.477956 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f066d22c-10b0-4ae4-8e14-0e99502ff8d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.480279 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.487459 4957 scope.go:117] "RemoveContainer" containerID="611056a1489b790eab2128b2a605128fa652d4d78f4feb31d03e12705c475766" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.492839 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507303 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507775 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507788 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507810 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-notification-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507816 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-notification-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507838 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507845 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507875 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-log" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507881 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-log" Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507897 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="sg-core" Nov 28 21:12:13 crc kubenswrapper[4957]: 
I1128 21:12:13.507905 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="sg-core" Nov 28 21:12:13 crc kubenswrapper[4957]: E1128 21:12:13.507915 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-central-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.507921 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-central-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508128 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="sg-core" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508147 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-central-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508155 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="ceilometer-notification-agent" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508162 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="621a0725-4c27-47d3-be24-89a00de305b0" containerName="proxy-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508183 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-log" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.508196 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" containerName="glance-httpd" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.510121 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.518854 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.522334 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.522460 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.580601 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.580774 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.581032 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.581171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.581245 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlfm\" (UniqueName: \"kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.581271 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.581295 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.668332 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5199-account-create-update-clxd9"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684878 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prlfm\" (UniqueName: \"kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684903 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684933 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.684964 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.685611 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.687071 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.690509 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.690668 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-28e1-account-create-update-r5nnk"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 
21:12:13.692099 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.696889 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.697461 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.703940 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prlfm\" (UniqueName: \"kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm\") pod \"ceilometer-0\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") " pod="openstack/ceilometer-0" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.706008 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9e02-account-create-update-b59kd"] Nov 28 21:12:13 crc kubenswrapper[4957]: W1128 21:12:13.732696 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c4efc2e_f4e9_483b_ba4f_008ae1d2ec76.slice/crio-19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f WatchSource:0}: Error finding container 19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f: Status 404 returned error can't find the container with id 19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.732850 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m5vbz"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.752306 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sz4nq"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.768660 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.801192 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.807247 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6598dd477f-t4jws" Nov 28 21:12:13 crc kubenswrapper[4957]: I1128 21:12:13.849787 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.039369 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5vbz" event={"ID":"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7","Type":"ContainerStarted","Data":"62d1bf80d033ad4c80ba636337dbdeaa42a521a419e12f3e3665b2b10e8df35e"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.041672 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29dab28a-afdf-4c02-a83a-f43c408b24ee","Type":"ContainerStarted","Data":"fa20d4d036b4eea0831c72dc47d95889e799444756b2bbf4795a6c3daa0d3984"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.046784 4957 generic.go:334] "Generic (PLEG): container finished" podID="d9ea8814-97e9-483f-b18b-152cf55db66e" containerID="aba8946376771fb205491deba71aa2905b4f707705a4bd4701e265e22036c32c" exitCode=0 Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.046855 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tfscq" event={"ID":"d9ea8814-97e9-483f-b18b-152cf55db66e","Type":"ContainerDied","Data":"aba8946376771fb205491deba71aa2905b4f707705a4bd4701e265e22036c32c"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.046878 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tfscq" event={"ID":"d9ea8814-97e9-483f-b18b-152cf55db66e","Type":"ContainerStarted","Data":"1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.049567 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" event={"ID":"689e0a10-8474-4f07-b2c3-35cb230e2803","Type":"ContainerStarted","Data":"cf392052458ee60351b70bfd8595ac7dcde4f9489732f56ed744256f7d2c11a7"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.065956 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5199-account-create-update-clxd9" event={"ID":"c26dceba-3629-4241-84b1-72015dff8552","Type":"ContainerStarted","Data":"b828127f85e721b2e997d084634e57cb90a4ea37e44735840cc2293cb6527e33"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.066022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5199-account-create-update-clxd9" event={"ID":"c26dceba-3629-4241-84b1-72015dff8552","Type":"ContainerStarted","Data":"5c5d30d030dfeacb644d21fadc2a88ebaeae3207728120427461cb29c1bb8d14"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.079870 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f066d22c-10b0-4ae4-8e14-0e99502ff8d6","Type":"ContainerDied","Data":"19a0fcb575d82ad4795cf4395b1482b0e29995fb9515b8dfbe8ffeebbc49f3f3"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.079946 4957 scope.go:117] "RemoveContainer" containerID="5933658893a12aed819e1592d5775a5c22c4e2fe4131c71bf20c0e29d5068eae" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.080063 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.100678 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28e1-account-create-update-r5nnk" event={"ID":"28598bcc-eeb8-4f16-a9f3-504804e6dd44","Type":"ContainerStarted","Data":"90ea345ca793e7a4d588f26feb7a06158e8fb5b254b57f6bad1d88443cb5d780"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.106748 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5199-account-create-update-clxd9" podStartSLOduration=6.106721409 podStartE2EDuration="6.106721409s" podCreationTimestamp="2025-11-28 21:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:14.091420703 +0000 UTC m=+1373.560068612" watchObservedRunningTime="2025-11-28 21:12:14.106721409 +0000 UTC m=+1373.575369318" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.116411 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sz4nq" event={"ID":"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76","Type":"ContainerStarted","Data":"19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f"} Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.250151 4957 scope.go:117] "RemoveContainer" containerID="31eeb8fa31a4eb4b006669c9c18aaa04124e0678df835b7e79586888328bb713" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.262217 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.301092 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.306260 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.309193 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.311647 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.314822 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.337757 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.420508 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-logs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.420932 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x42j\" (UniqueName: \"kubernetes.io/projected/34d417c8-e8c3-491b-84e9-0db9f9a10038-kube-api-access-2x42j\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.420957 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.420983 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.421022 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.421051 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.421069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.421086 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523275 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-logs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x42j\" (UniqueName: \"kubernetes.io/projected/34d417c8-e8c3-491b-84e9-0db9f9a10038-kube-api-access-2x42j\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523486 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523524 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523629 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.523687 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.526718 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.529379 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.531872 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34d417c8-e8c3-491b-84e9-0db9f9a10038-logs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.535338 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.536167 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-config-data\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.548466 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x42j\" (UniqueName: \"kubernetes.io/projected/34d417c8-e8c3-491b-84e9-0db9f9a10038-kube-api-access-2x42j\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.571690 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-scripts\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.573735 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.576824 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.577499 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34d417c8-e8c3-491b-84e9-0db9f9a10038-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"34d417c8-e8c3-491b-84e9-0db9f9a10038\") " pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.635863 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.838285 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621a0725-4c27-47d3-be24-89a00de305b0" path="/var/lib/kubelet/pods/621a0725-4c27-47d3-be24-89a00de305b0/volumes" Nov 28 21:12:14 crc kubenswrapper[4957]: I1128 21:12:14.840530 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f066d22c-10b0-4ae4-8e14-0e99502ff8d6" path="/var/lib/kubelet/pods/f066d22c-10b0-4ae4-8e14-0e99502ff8d6/volumes" Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.151150 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerStarted","Data":"c2d80b252a90297e56b66ca37b389f7c4009559d97f1619bbab59186f26c09e1"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.153043 4957 generic.go:334] "Generic (PLEG): container finished" podID="c26dceba-3629-4241-84b1-72015dff8552" containerID="b828127f85e721b2e997d084634e57cb90a4ea37e44735840cc2293cb6527e33" exitCode=0 Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.153536 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5199-account-create-update-clxd9" event={"ID":"c26dceba-3629-4241-84b1-72015dff8552","Type":"ContainerDied","Data":"b828127f85e721b2e997d084634e57cb90a4ea37e44735840cc2293cb6527e33"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.171965 4957 generic.go:334] "Generic (PLEG): container finished" podID="b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" containerID="1f952ee909f8bb6639a0e21a3735444bad908e104d4d33503bb4ec5fc5f84322" exitCode=0 Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.172061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5vbz" event={"ID":"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7","Type":"ContainerDied","Data":"1f952ee909f8bb6639a0e21a3735444bad908e104d4d33503bb4ec5fc5f84322"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.199481 4957 generic.go:334] "Generic (PLEG): container finished" podID="28598bcc-eeb8-4f16-a9f3-504804e6dd44" containerID="6b2caa21d19f776b9c67d7e563c8117825da567de8aa350a872420c27b7694fb" exitCode=0 Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.199585 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28e1-account-create-update-r5nnk" event={"ID":"28598bcc-eeb8-4f16-a9f3-504804e6dd44","Type":"ContainerDied","Data":"6b2caa21d19f776b9c67d7e563c8117825da567de8aa350a872420c27b7694fb"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.231958 4957 generic.go:334] "Generic (PLEG): container finished" podID="6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" containerID="d5f1eeb4f60198f011e8a5283bf531ae63010c246587dca64c1f8be9d846d07e" exitCode=0 Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.232399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sz4nq" event={"ID":"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76","Type":"ContainerDied","Data":"d5f1eeb4f60198f011e8a5283bf531ae63010c246587dca64c1f8be9d846d07e"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.269720 4957 generic.go:334] "Generic (PLEG): container finished" podID="689e0a10-8474-4f07-b2c3-35cb230e2803" containerID="10dd1628af95e5c2b6a1ebfcfafff0dc237b61fddb3bfca10fecefa8e6d6fc08" exitCode=0 Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.269966 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-9e02-account-create-update-b59kd" event={"ID":"689e0a10-8474-4f07-b2c3-35cb230e2803","Type":"ContainerDied","Data":"10dd1628af95e5c2b6a1ebfcfafff0dc237b61fddb3bfca10fecefa8e6d6fc08"} Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.425004 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.816256 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.878070 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts\") pod \"d9ea8814-97e9-483f-b18b-152cf55db66e\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.878173 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q22tr\" (UniqueName: \"kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr\") pod \"d9ea8814-97e9-483f-b18b-152cf55db66e\" (UID: \"d9ea8814-97e9-483f-b18b-152cf55db66e\") " Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.880230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9ea8814-97e9-483f-b18b-152cf55db66e" (UID: "d9ea8814-97e9-483f-b18b-152cf55db66e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.895547 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr" (OuterVolumeSpecName: "kube-api-access-q22tr") pod "d9ea8814-97e9-483f-b18b-152cf55db66e" (UID: "d9ea8814-97e9-483f-b18b-152cf55db66e"). InnerVolumeSpecName "kube-api-access-q22tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.980773 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9ea8814-97e9-483f-b18b-152cf55db66e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:15 crc kubenswrapper[4957]: I1128 21:12:15.980815 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q22tr\" (UniqueName: \"kubernetes.io/projected/d9ea8814-97e9-483f-b18b-152cf55db66e-kube-api-access-q22tr\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.308656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29dab28a-afdf-4c02-a83a-f43c408b24ee","Type":"ContainerStarted","Data":"fcca3b99d234f734bcd369d70f5cd9e0044cc5a579b00905a0137a6d2910a82f"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.308972 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29dab28a-afdf-4c02-a83a-f43c408b24ee","Type":"ContainerStarted","Data":"78c4224531dbb3ba40f62f8af3ba1eb02e3241d97733d14e6f9a01ff1ddfe6ce"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.335178 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34d417c8-e8c3-491b-84e9-0db9f9a10038","Type":"ContainerStarted","Data":"dfd80087d3c57b1cdd4216c23ee6fc0da1fdf5f91f22348cfadf546918781339"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.335237 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34d417c8-e8c3-491b-84e9-0db9f9a10038","Type":"ContainerStarted","Data":"08644594136bea03defbd2c3007d9339d7aa44d363030d71c39e376df3e15b53"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.337937 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.337919905 podStartE2EDuration="11.337919905s" podCreationTimestamp="2025-11-28 21:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:16.334004349 +0000 UTC m=+1375.802652258" watchObservedRunningTime="2025-11-28 21:12:16.337919905 +0000 UTC m=+1375.806567804" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.356492 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tfscq" event={"ID":"d9ea8814-97e9-483f-b18b-152cf55db66e","Type":"ContainerDied","Data":"1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.356532 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5c942a055f18563a86d3e35ac566ab6ff4d926e5bc13db4d0d0a0cbc1cfbcc" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.356598 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tfscq" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.368878 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerStarted","Data":"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"} Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.468336 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:12:16 crc kubenswrapper[4957]: E1128 21:12:16.469270 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ea8814-97e9-483f-b18b-152cf55db66e" containerName="mariadb-database-create" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.469291 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ea8814-97e9-483f-b18b-152cf55db66e" containerName="mariadb-database-create" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.469489 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ea8814-97e9-483f-b18b-152cf55db66e" containerName="mariadb-database-create" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.470292 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.473021 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.473419 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.473696 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kbz6c" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.497793 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.502183 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.502256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4nz\" (UniqueName: \"kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.502325 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.502350 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: 
\"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.607872 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.607929 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4nz\" (UniqueName: \"kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.607993 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.608021 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.611323 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.613318 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.621996 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.622221 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.630524 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.665653 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.682096 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4nz\" (UniqueName: \"kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz\") pod \"heat-engine-9c7ff8b44-mp2wj\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.709903 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.709977 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkf84\" (UniqueName: \"kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.710089 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.710133 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.710158 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.710186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.738060 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.739707 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.741677 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.786264 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.831650 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.833641 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.833714 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834226 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834261 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834327 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blrsc\" (UniqueName: \"kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834391 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834409 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834514 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkf84\" (UniqueName: \"kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.834580 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.836258 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.836816 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.837052 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.837454 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"] Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.837536 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.839665 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.840561 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.841420 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.847679 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.869643 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkf84\" (UniqueName: \"kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84\") pod \"dnsmasq-dns-7756b9d78c-hxz5w\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.936683 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.946512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.946615 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blrsc\" (UniqueName: \"kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.946797 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.946829 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.946961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.947105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:16 crc kubenswrapper[4957]: I1128 21:12:16.990701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbv6\" (UniqueName: \"kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.000060 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.002572 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.003942 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.031058 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.099353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.099399 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnbv6\" (UniqueName: \"kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.100758 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.101424 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.107282 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.137516 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnbv6\" (UniqueName: \"kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.138184 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blrsc\" (UniqueName: \"kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc\") pod \"heat-cfnapi-6c99bbb6f7-ntvsj\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.138343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.139543 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data\") pod \"heat-api-6795cdbb7b-2qqp4\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc 
kubenswrapper[4957]: I1128 21:12:17.177927 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.258079 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.379301 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.406536 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcrs\" (UniqueName: \"kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs\") pod \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.406685 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts\") pod \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\" (UID: \"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.433667 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs" (OuterVolumeSpecName: "kube-api-access-rxcrs") pod "6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" (UID: "6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76"). InnerVolumeSpecName "kube-api-access-rxcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.435550 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" (UID: "6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.475683 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-sz4nq" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.478303 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sz4nq" event={"ID":"6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76","Type":"ContainerDied","Data":"19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f"} Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.478360 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19400d5488e464e0138b5f860f3ccf0a641d500da582df05ac16f65e9b7c8a3f" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.508944 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcrs\" (UniqueName: \"kubernetes.io/projected/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-kube-api-access-rxcrs\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.508964 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.671574 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.700260 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.703539 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.712444 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827261 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts\") pod \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75tkj\" (UniqueName: \"kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj\") pod \"689e0a10-8474-4f07-b2c3-35cb230e2803\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827384 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whgt5\" (UniqueName: \"kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5\") pod \"c26dceba-3629-4241-84b1-72015dff8552\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827456 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts\") pod \"c26dceba-3629-4241-84b1-72015dff8552\" (UID: \"c26dceba-3629-4241-84b1-72015dff8552\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827480 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8xcb\" (UniqueName: \"kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb\") pod \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\" (UID: \"28598bcc-eeb8-4f16-a9f3-504804e6dd44\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827517 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts\") pod \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827639 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts\") pod \"689e0a10-8474-4f07-b2c3-35cb230e2803\" (UID: \"689e0a10-8474-4f07-b2c3-35cb230e2803\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.827726 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2skws\" (UniqueName: \"kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws\") pod \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\" (UID: \"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7\") " Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.833740 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" (UID: "b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.833832 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28598bcc-eeb8-4f16-a9f3-504804e6dd44" (UID: "28598bcc-eeb8-4f16-a9f3-504804e6dd44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.835347 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "689e0a10-8474-4f07-b2c3-35cb230e2803" (UID: "689e0a10-8474-4f07-b2c3-35cb230e2803"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.835415 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c26dceba-3629-4241-84b1-72015dff8552" (UID: "c26dceba-3629-4241-84b1-72015dff8552"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.846909 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws" (OuterVolumeSpecName: "kube-api-access-2skws") pod "b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" (UID: "b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7"). InnerVolumeSpecName "kube-api-access-2skws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.852936 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj" (OuterVolumeSpecName: "kube-api-access-75tkj") pod "689e0a10-8474-4f07-b2c3-35cb230e2803" (UID: "689e0a10-8474-4f07-b2c3-35cb230e2803"). InnerVolumeSpecName "kube-api-access-75tkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.853638 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb" (OuterVolumeSpecName: "kube-api-access-w8xcb") pod "28598bcc-eeb8-4f16-a9f3-504804e6dd44" (UID: "28598bcc-eeb8-4f16-a9f3-504804e6dd44"). InnerVolumeSpecName "kube-api-access-w8xcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.869349 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.886459 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5" (OuterVolumeSpecName: "kube-api-access-whgt5") pod "c26dceba-3629-4241-84b1-72015dff8552" (UID: "c26dceba-3629-4241-84b1-72015dff8552"). InnerVolumeSpecName "kube-api-access-whgt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931005 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931232 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689e0a10-8474-4f07-b2c3-35cb230e2803-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931242 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2skws\" (UniqueName: \"kubernetes.io/projected/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7-kube-api-access-2skws\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931252 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28598bcc-eeb8-4f16-a9f3-504804e6dd44-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931262 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75tkj\" (UniqueName: \"kubernetes.io/projected/689e0a10-8474-4f07-b2c3-35cb230e2803-kube-api-access-75tkj\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931270 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whgt5\" (UniqueName: \"kubernetes.io/projected/c26dceba-3629-4241-84b1-72015dff8552-kube-api-access-whgt5\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931279 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26dceba-3629-4241-84b1-72015dff8552-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:17 crc kubenswrapper[4957]: I1128 21:12:17.931289 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8xcb\" (UniqueName: \"kubernetes.io/projected/28598bcc-eeb8-4f16-a9f3-504804e6dd44-kube-api-access-w8xcb\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.302069 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.336096 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"] Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.514984 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28e1-account-create-update-r5nnk" event={"ID":"28598bcc-eeb8-4f16-a9f3-504804e6dd44","Type":"ContainerDied","Data":"90ea345ca793e7a4d588f26feb7a06158e8fb5b254b57f6bad1d88443cb5d780"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.515030 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ea345ca793e7a4d588f26feb7a06158e8fb5b254b57f6bad1d88443cb5d780" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.515083 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28e1-account-create-update-r5nnk" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.521073 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"34d417c8-e8c3-491b-84e9-0db9f9a10038","Type":"ContainerStarted","Data":"e241487d5e9bdfc2d30e754d04a6a1ede92a5a5b826bb29bea8b2e712606aa6a"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.552071 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" event={"ID":"689e0a10-8474-4f07-b2c3-35cb230e2803","Type":"ContainerDied","Data":"cf392052458ee60351b70bfd8595ac7dcde4f9489732f56ed744256f7d2c11a7"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.552122 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf392052458ee60351b70bfd8595ac7dcde4f9489732f56ed744256f7d2c11a7" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.552195 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9e02-account-create-update-b59kd" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.564036 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.564019715 podStartE2EDuration="4.564019715s" podCreationTimestamp="2025-11-28 21:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:18.554147072 +0000 UTC m=+1378.022795001" watchObservedRunningTime="2025-11-28 21:12:18.564019715 +0000 UTC m=+1378.032667624" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.586546 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c7ff8b44-mp2wj" event={"ID":"2b5d38cb-30e7-40d2-9c78-a882bd723332","Type":"ContainerStarted","Data":"6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.586588 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c7ff8b44-mp2wj" event={"ID":"2b5d38cb-30e7-40d2-9c78-a882bd723332","Type":"ContainerStarted","Data":"91eb025c33b07833c266a50bfa6bcdbd322a0c9ff467dcd400e805d565983ca1"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.586915 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.596741 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" event={"ID":"a57e309f-cfbf-47ac-8f73-9276f89ce36b","Type":"ContainerStarted","Data":"a82fefd1eba2882c7dc6d87bb6a6709dd019737b422e39b1824e26a73d96ac83"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.610378 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5199-account-create-update-clxd9" event={"ID":"c26dceba-3629-4241-84b1-72015dff8552","Type":"ContainerDied","Data":"5c5d30d030dfeacb644d21fadc2a88ebaeae3207728120427461cb29c1bb8d14"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.610599 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c5d30d030dfeacb644d21fadc2a88ebaeae3207728120427461cb29c1bb8d14" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.610749 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5199-account-create-update-clxd9" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.617027 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9c7ff8b44-mp2wj" podStartSLOduration=2.617004877 podStartE2EDuration="2.617004877s" podCreationTimestamp="2025-11-28 21:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:18.611451771 +0000 UTC m=+1378.080099680" watchObservedRunningTime="2025-11-28 21:12:18.617004877 +0000 UTC m=+1378.085652786" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.621646 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5vbz" event={"ID":"b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7","Type":"ContainerDied","Data":"62d1bf80d033ad4c80ba636337dbdeaa42a521a419e12f3e3665b2b10e8df35e"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.621722 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62d1bf80d033ad4c80ba636337dbdeaa42a521a419e12f3e3665b2b10e8df35e" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.621779 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5vbz" Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.650951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6795cdbb7b-2qqp4" event={"ID":"63f3217b-0b9a-4621-abe7-6e1e90b01f35","Type":"ContainerStarted","Data":"d6ee45070bad37e507b6444641fe00eb83d17ed295449eb0543d110d681ed895"} Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.667252 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerStarted","Data":"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"} Nov 28 21:12:18 crc kubenswrapper[4957]: W1128 21:12:18.751725 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dcb7aec_fdd4_4acb_9e80_64086bfe64c4.slice/crio-29189d101aa68538916a398331854bbd707d85b2ba37da2be81c905da41c3a79 WatchSource:0}: Error finding container 29189d101aa68538916a398331854bbd707d85b2ba37da2be81c905da41c3a79: Status 404 returned error can't find the container with id 29189d101aa68538916a398331854bbd707d85b2ba37da2be81c905da41c3a79 Nov 28 21:12:18 crc kubenswrapper[4957]: I1128 21:12:18.760372 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"] Nov 28 21:12:19 crc kubenswrapper[4957]: I1128 21:12:19.706857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" event={"ID":"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4","Type":"ContainerStarted","Data":"29189d101aa68538916a398331854bbd707d85b2ba37da2be81c905da41c3a79"} Nov 28 21:12:19 crc kubenswrapper[4957]: I1128 21:12:19.713620 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerStarted","Data":"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"} Nov 28 21:12:19 crc kubenswrapper[4957]: I1128 21:12:19.716879 4957 generic.go:334] "Generic (PLEG): container finished" podID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerID="27e94fb683bfec5be446290e299b9961ab3f41f5b8d7fb7dc4b9e8946424b158" 
exitCode=0 Nov 28 21:12:19 crc kubenswrapper[4957]: I1128 21:12:19.716965 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" event={"ID":"a57e309f-cfbf-47ac-8f73-9276f89ce36b","Type":"ContainerDied","Data":"27e94fb683bfec5be446290e299b9961ab3f41f5b8d7fb7dc4b9e8946424b158"} Nov 28 21:12:20 crc kubenswrapper[4957]: I1128 21:12:20.735721 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" event={"ID":"a57e309f-cfbf-47ac-8f73-9276f89ce36b","Type":"ContainerStarted","Data":"f5a10e9b3f6da32803b771254b8146881ed1ff5a3313b0df3db2a7422d1e64f0"} Nov 28 21:12:20 crc kubenswrapper[4957]: I1128 21:12:20.736081 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:20 crc kubenswrapper[4957]: I1128 21:12:20.765665 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" podStartSLOduration=4.765636242 podStartE2EDuration="4.765636242s" podCreationTimestamp="2025-11-28 21:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:20.755761729 +0000 UTC m=+1380.224409638" watchObservedRunningTime="2025-11-28 21:12:20.765636242 +0000 UTC m=+1380.234284151" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.759974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerStarted","Data":"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"} Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.760632 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.762930 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6795cdbb7b-2qqp4" event={"ID":"63f3217b-0b9a-4621-abe7-6e1e90b01f35","Type":"ContainerStarted","Data":"9a623a2700407a2446b35c63b325e9b6de3d083a8deceec0fb21e0f32c1f339f"} Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.763333 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.764244 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" event={"ID":"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4","Type":"ContainerStarted","Data":"8539f40e8ead99acead4c3bf680e093bccb1cc08dffdfb2239cfb3bc94550e1c"} Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.764382 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.792162 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.155907279 podStartE2EDuration="9.792138205s" podCreationTimestamp="2025-11-28 21:12:13 +0000 UTC" firstStartedPulling="2025-11-28 21:12:14.579158806 +0000 UTC m=+1374.047806715" lastFinishedPulling="2025-11-28 21:12:20.215389732 +0000 UTC m=+1379.684037641" observedRunningTime="2025-11-28 21:12:22.779104594 +0000 UTC m=+1382.247752513" watchObservedRunningTime="2025-11-28 21:12:22.792138205 +0000 UTC m=+1382.260786114" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.828977 4957 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podStartSLOduration=3.595099269 podStartE2EDuration="6.82895683s" podCreationTimestamp="2025-11-28 21:12:16 +0000 UTC" firstStartedPulling="2025-11-28 21:12:18.768268327 +0000 UTC m=+1378.236916236" lastFinishedPulling="2025-11-28 21:12:22.002125888 +0000 UTC m=+1381.470773797" observedRunningTime="2025-11-28 21:12:22.799757702 +0000 UTC m=+1382.268405611" watchObservedRunningTime="2025-11-28 21:12:22.82895683 +0000 UTC m=+1382.297604739" Nov 28 21:12:22 crc kubenswrapper[4957]: I1128 21:12:22.842256 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6795cdbb7b-2qqp4" podStartSLOduration=3.178226588 podStartE2EDuration="6.842234967s" podCreationTimestamp="2025-11-28 21:12:16 +0000 UTC" firstStartedPulling="2025-11-28 21:12:18.334447029 +0000 UTC m=+1377.803094938" lastFinishedPulling="2025-11-28 21:12:21.998455408 +0000 UTC m=+1381.467103317" observedRunningTime="2025-11-28 21:12:22.816135305 +0000 UTC m=+1382.284783214" watchObservedRunningTime="2025-11-28 21:12:22.842234967 +0000 UTC m=+1382.310882876" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.373306 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dnz8n"] Nov 28 21:12:23 crc kubenswrapper[4957]: E1128 21:12:23.374032 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374049 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: E1128 21:12:23.374072 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="689e0a10-8474-4f07-b2c3-35cb230e2803" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374079 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="689e0a10-8474-4f07-b2c3-35cb230e2803" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: E1128 21:12:23.374099 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28598bcc-eeb8-4f16-a9f3-504804e6dd44" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374105 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="28598bcc-eeb8-4f16-a9f3-504804e6dd44" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: E1128 21:12:23.374128 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374134 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: E1128 21:12:23.374147 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26dceba-3629-4241-84b1-72015dff8552" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374155 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26dceba-3629-4241-84b1-72015dff8552" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374362 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374375 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" containerName="mariadb-database-create" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374386 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="28598bcc-eeb8-4f16-a9f3-504804e6dd44" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374402 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26dceba-3629-4241-84b1-72015dff8552" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.374414 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="689e0a10-8474-4f07-b2c3-35cb230e2803" containerName="mariadb-account-create-update" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.375170 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.381070 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.381497 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.381626 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rlz2c" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.386511 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dnz8n"] Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.540500 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmw8v\" (UniqueName: \"kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.540855 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.540979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.541069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 
21:12:23.643234 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmw8v\" (UniqueName: \"kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.643572 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.643699 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.643791 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.650149 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.650799 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.650955 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.664964 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmw8v\" (UniqueName: \"kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v\") pod \"nova-cell0-conductor-db-sync-dnz8n\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:23 crc kubenswrapper[4957]: I1128 21:12:23.696420 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.243451 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dnz8n"] Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.638464 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.638814 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.677850 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.685040 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.793906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" event={"ID":"efb8e30f-337f-4de0-8508-486479b41e97","Type":"ContainerStarted","Data":"fca2f043191946cc477ff02790e61e601f73ac6a880f4e54fe86d92a1ed4d5f4"} Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.794727 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:24 crc kubenswrapper[4957]: I1128 21:12:24.794759 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.020810 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.022601 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-central-agent" containerID="cri-o://5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5" gracePeriod=30 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.022684 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="proxy-httpd" containerID="cri-o://a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804" gracePeriod=30 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.022746 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-notification-agent" containerID="cri-o://5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e" gracePeriod=30 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.022697 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="sg-core" containerID="cri-o://677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5" gracePeriod=30 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.597565 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.599434 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.614414 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.616030 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.644700 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.657511 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.698136 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwc4\" (UniqueName: \"kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.698515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5b8\" (UniqueName: \"kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.698640 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.698861 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.699068 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.699246 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.699461 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom\") pod \"heat-engine-565895bd86-z2gdh\" (UID: 
\"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.699675 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.710980 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.713133 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.733682 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802031 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802097 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802156 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802191 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802305 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802422 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwc4\" (UniqueName: \"kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802453 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802487 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm5b8\" (UniqueName: \"kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802507 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802524 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802550 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km25f\" (UniqueName: \"kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.802570 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.809420 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.812966 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.813044 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.813452 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.816374 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.820111 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.825324 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwc4\" (UniqueName: \"kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4\") pod \"heat-cfnapi-78dd4f676d-5jrq4\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.825662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm5b8\" (UniqueName: \"kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8\") pod \"heat-engine-565895bd86-z2gdh\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.833562 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.833607 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.842397 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerID="a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804" exitCode=0 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.842889 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerID="677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5" exitCode=2 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.842908 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerID="5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e" exitCode=0 Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.842562 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerDied","Data":"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"} Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.844747 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerDied","Data":"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"} Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.844763 4957 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerDied","Data":"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"} Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.890524 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.906193 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.906658 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.906730 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.906798 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km25f\" (UniqueName: \"kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.906920 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.927730 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.930548 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.931306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle\") pod \"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.931814 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km25f\" (UniqueName: \"kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f\") pod 
\"heat-api-7ccdb8cd88-jgq8q\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.966165 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:25 crc kubenswrapper[4957]: I1128 21:12:25.973896 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.043580 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.816230 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:26 crc kubenswrapper[4957]: W1128 21:12:26.823113 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90b02279_5906_420f_9def_af822c4a6ff3.slice/crio-a2116ae3f37eaeada0f7e9a28e25c33260e3af8ac9faf6b682b05028a72377ed WatchSource:0}: Error finding container a2116ae3f37eaeada0f7e9a28e25c33260e3af8ac9faf6b682b05028a72377ed: Status 404 returned error can't find the container with id a2116ae3f37eaeada0f7e9a28e25c33260e3af8ac9faf6b682b05028a72377ed Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.896251 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" event={"ID":"90b02279-5906-420f-9def-af822c4a6ff3","Type":"ContainerStarted","Data":"a2116ae3f37eaeada0f7e9a28e25c33260e3af8ac9faf6b682b05028a72377ed"} Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.896334 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.896356 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.955850 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:26 crc kubenswrapper[4957]: I1128 21:12:26.986285 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.033124 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.103167 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.103396 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="dnsmasq-dns" containerID="cri-o://3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316" gracePeriod=10 Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.716866 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796052 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796533 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796617 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796660 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqd64\" (UniqueName: \"kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796729 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.796768 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0\") pod \"93c900fb-97c4-4e17-ae10-873f8d8378f7\" (UID: \"93c900fb-97c4-4e17-ae10-873f8d8378f7\") " Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.821123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64" (OuterVolumeSpecName: "kube-api-access-lqd64") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "kube-api-access-lqd64". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.902009 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqd64\" (UniqueName: \"kubernetes.io/projected/93c900fb-97c4-4e17-ae10-873f8d8378f7-kube-api-access-lqd64\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.939972 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.943016 4957 generic.go:334] "Generic (PLEG): container finished" podID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerID="3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316" exitCode=0 Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.943076 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" event={"ID":"93c900fb-97c4-4e17-ae10-873f8d8378f7","Type":"ContainerDied","Data":"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316"} Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.943106 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" event={"ID":"93c900fb-97c4-4e17-ae10-873f8d8378f7","Type":"ContainerDied","Data":"fe6f8c0a0d08b0599ce9ba3bdb0ae58695d9f8c626545970bddf1291e5e8405a"} Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.943123 4957 scope.go:117] "RemoveContainer" containerID="3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.943265 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7gvzp" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.964420 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.974477 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.974983 4957 generic.go:334] "Generic (PLEG): container finished" podID="90b02279-5906-420f-9def-af822c4a6ff3" containerID="3334194f814d09cab3c250a6fdfdf5cefc5254ffa477ef7a11605c8279d08a0e" exitCode=1 Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.975064 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" event={"ID":"90b02279-5906-420f-9def-af822c4a6ff3","Type":"ContainerDied","Data":"3334194f814d09cab3c250a6fdfdf5cefc5254ffa477ef7a11605c8279d08a0e"} Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.975755 4957 scope.go:117] "RemoveContainer" containerID="3334194f814d09cab3c250a6fdfdf5cefc5254ffa477ef7a11605c8279d08a0e" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.979845 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config" (OuterVolumeSpecName: "config") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.995750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccdb8cd88-jgq8q" event={"ID":"874e2658-401a-467f-bce4-5ad01f6c393c","Type":"ContainerStarted","Data":"b8f393ad8c98e7987a99625434c89cea114e8511cc6827bf19d1e96e57ea8fda"} Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.995801 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccdb8cd88-jgq8q" event={"ID":"874e2658-401a-467f-bce4-5ad01f6c393c","Type":"ContainerStarted","Data":"02e6edffb92b3393dfb4a97c799b9b5c8d74718fec7d7f5d80dbdf103f4cb2d8"} Nov 28 21:12:27 crc kubenswrapper[4957]: I1128 21:12:27.997043 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.004683 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.004710 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.004720 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.004728 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.006686 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93c900fb-97c4-4e17-ae10-873f8d8378f7" (UID: "93c900fb-97c4-4e17-ae10-873f8d8378f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.033732 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-565895bd86-z2gdh" event={"ID":"7973a31a-9f1b-4f08-a628-b739b15e2a6d","Type":"ContainerStarted","Data":"ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3"} Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.033770 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-565895bd86-z2gdh" event={"ID":"7973a31a-9f1b-4f08-a628-b739b15e2a6d","Type":"ContainerStarted","Data":"5917273e458286a36a4a0c2cbf7f3612c6df28ba192da02addb5ca2a23f12625"} Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.033804 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.057725 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7ccdb8cd88-jgq8q" podStartSLOduration=3.057701995 podStartE2EDuration="3.057701995s" podCreationTimestamp="2025-11-28 21:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:28.032050254 +0000 UTC m=+1387.500698173" watchObservedRunningTime="2025-11-28 21:12:28.057701995 +0000 UTC m=+1387.526349904" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.082083 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-565895bd86-z2gdh" podStartSLOduration=3.082060614 podStartE2EDuration="3.082060614s" podCreationTimestamp="2025-11-28 21:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:28.072724884 +0000 UTC m=+1387.541372793" watchObservedRunningTime="2025-11-28 21:12:28.082060614 +0000 UTC m=+1387.550708523" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.083942 4957 scope.go:117] "RemoveContainer" containerID="2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.107116 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c900fb-97c4-4e17-ae10-873f8d8378f7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.264885 4957 scope.go:117] "RemoveContainer" containerID="3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316" Nov 28 21:12:28 crc kubenswrapper[4957]: E1128 21:12:28.270155 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316\": container with ID starting with 3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316 not found: ID does not exist" containerID="3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.270196 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316"} err="failed to get container status \"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316\": rpc error: code = NotFound desc = could not find container \"3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316\": container 
with ID starting with 3875b3b3dfc0dec8948874d64ea7bab08ee3e82094dc1640dedcd77d4da8d316 not found: ID does not exist" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.270232 4957 scope.go:117] "RemoveContainer" containerID="2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25" Nov 28 21:12:28 crc kubenswrapper[4957]: E1128 21:12:28.282448 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25\": container with ID starting with 2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25 not found: ID does not exist" containerID="2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.282499 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25"} err="failed to get container status \"2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25\": rpc error: code = NotFound desc = could not find container \"2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25\": container with ID starting with 2f9e53a3cbb78270c94547201fa4e727e27a0e16200d4eb1739496e0bc539b25 not found: ID does not exist" Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.347274 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.357750 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7gvzp"] Nov 28 21:12:28 crc kubenswrapper[4957]: I1128 21:12:28.829802 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" path="/var/lib/kubelet/pods/93c900fb-97c4-4e17-ae10-873f8d8378f7/volumes" Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.080849 4957 generic.go:334] "Generic (PLEG): container finished" podID="90b02279-5906-420f-9def-af822c4a6ff3" containerID="27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226" exitCode=1 Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.080952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" event={"ID":"90b02279-5906-420f-9def-af822c4a6ff3","Type":"ContainerDied","Data":"27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226"} Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.080994 4957 scope.go:117] "RemoveContainer" containerID="3334194f814d09cab3c250a6fdfdf5cefc5254ffa477ef7a11605c8279d08a0e" Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.081779 4957 scope.go:117] "RemoveContainer" containerID="27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226" Nov 28 21:12:29 crc kubenswrapper[4957]: E1128 21:12:29.082140 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4f676d-5jrq4_openstack(90b02279-5906-420f-9def-af822c4a6ff3)\"" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" podUID="90b02279-5906-420f-9def-af822c4a6ff3" Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.089779 4957 generic.go:334] "Generic (PLEG): container finished" podID="874e2658-401a-467f-bce4-5ad01f6c393c" containerID="b8f393ad8c98e7987a99625434c89cea114e8511cc6827bf19d1e96e57ea8fda" exitCode=1 Nov 28 
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.090919 4957 scope.go:117] "RemoveContainer" containerID="b8f393ad8c98e7987a99625434c89cea114e8511cc6827bf19d1e96e57ea8fda"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.091193 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccdb8cd88-jgq8q" event={"ID":"874e2658-401a-467f-bce4-5ad01f6c393c","Type":"ContainerDied","Data":"b8f393ad8c98e7987a99625434c89cea114e8511cc6827bf19d1e96e57ea8fda"}
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.582882 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"]
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.583370 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" containerID="cri-o://8539f40e8ead99acead4c3bf680e093bccb1cc08dffdfb2239cfb3bc94550e1c" gracePeriod=60
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.600698 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"]
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.600933 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6795cdbb7b-2qqp4" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" containerID="cri-o://9a623a2700407a2446b35c63b325e9b6de3d083a8deceec0fb21e0f32c1f339f" gracePeriod=60
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.606179 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": EOF"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.606587 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": EOF"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.613340 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6795cdbb7b-2qqp4" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.613619 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6795cdbb7b-2qqp4" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": EOF"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.623334 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"]
Nov 28 21:12:29 crc kubenswrapper[4957]: E1128 21:12:29.624880 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="dnsmasq-dns"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.624902 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="dnsmasq-dns"
Nov 28 21:12:29 crc kubenswrapper[4957]: E1128 21:12:29.624936 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="init"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.624945 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="init"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.626255 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c900fb-97c4-4e17-ae10-873f8d8378f7" containerName="dnsmasq-dns"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.648736 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.654286 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.673131 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.715267 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"]
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.754850 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"]
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.757662 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759520 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759567 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759588 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759749 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c659\" (UniqueName: \"kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759767 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.759803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.762663 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.762850 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.771114 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"]
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862108 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c659\" (UniqueName: \"kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862157 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862194 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862230 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862264 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9p2k\" (UniqueName: \"kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862313 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862337 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.862370 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.863191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.863309 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.863440 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.863579 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.872446 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.879230 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.886549 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.888280 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.889564 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c659\" (UniqueName: \"kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.889625 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs\") pod \"heat-api-8445cd679c-6kwss\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969515 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969608 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969737 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969849 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.969889 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9p2k\" (UniqueName: \"kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.977088 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.977876 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.977897 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.981378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.989120 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.996844 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8445cd679c-6kwss"
Nov 28 21:12:29 crc kubenswrapper[4957]: I1128 21:12:29.998158 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9p2k\" (UniqueName: \"kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k\") pod \"heat-cfnapi-5f4944777d-4svqx\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.086765 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.086886 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.090250 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.101395 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f4944777d-4svqx"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.109146 4957 scope.go:117] "RemoveContainer" containerID="27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226"
Nov 28 21:12:30 crc kubenswrapper[4957]: E1128 21:12:30.109392 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4f676d-5jrq4_openstack(90b02279-5906-420f-9def-af822c4a6ff3)\"" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" podUID="90b02279-5906-420f-9def-af822c4a6ff3"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.118951 4957 generic.go:334] "Generic (PLEG): container finished" podID="874e2658-401a-467f-bce4-5ad01f6c393c" containerID="8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d" exitCode=1
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.119003 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccdb8cd88-jgq8q" event={"ID":"874e2658-401a-467f-bce4-5ad01f6c393c","Type":"ContainerDied","Data":"8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d"}
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.119053 4957 scope.go:117] "RemoveContainer" containerID="b8f393ad8c98e7987a99625434c89cea114e8511cc6827bf19d1e96e57ea8fda"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.120424 4957 scope.go:117] "RemoveContainer" containerID="8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d"
Nov 28 21:12:30 crc kubenswrapper[4957]: E1128 21:12:30.120897 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7ccdb8cd88-jgq8q_openstack(874e2658-401a-467f-bce4-5ad01f6c393c)\"" pod="openstack/heat-api-7ccdb8cd88-jgq8q" podUID="874e2658-401a-467f-bce4-5ad01f6c393c"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.876864 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"]
Nov 28 21:12:30 crc kubenswrapper[4957]: W1128 21:12:30.923866 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b5b68f_4c6d_4002_a784_5f8e85470f5f.slice/crio-3f042530c7ef3952daba8695f3cde5724dc8ccc90e3a0b6f2568ecb6f272f8d9 WatchSource:0}: Error finding container 3f042530c7ef3952daba8695f3cde5724dc8ccc90e3a0b6f2568ecb6f272f8d9: Status 404 returned error can't find the container with id 3f042530c7ef3952daba8695f3cde5724dc8ccc90e3a0b6f2568ecb6f272f8d9
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.925416 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"]
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.975132 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4"
Nov 28 21:12:30 crc kubenswrapper[4957]: I1128 21:12:30.975180 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.051074 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7ccdb8cd88-jgq8q"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.051115 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7ccdb8cd88-jgq8q"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.084338 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.146517 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerID="5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5" exitCode=0
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.146577 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerDied","Data":"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"}
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.146604 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ed70786-2196-4a33-9b79-b9cc16ee171a","Type":"ContainerDied","Data":"c2d80b252a90297e56b66ca37b389f7c4009559d97f1619bbab59186f26c09e1"}
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.146623 4957 scope.go:117] "RemoveContainer" containerID="a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.146738 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.151611 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f4944777d-4svqx" event={"ID":"b7b5b68f-4c6d-4002-a784-5f8e85470f5f","Type":"ContainerStarted","Data":"3f042530c7ef3952daba8695f3cde5724dc8ccc90e3a0b6f2568ecb6f272f8d9"}
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.157459 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8445cd679c-6kwss" event={"ID":"bb0b8e5f-611a-452d-9f0e-229f445c77d6","Type":"ContainerStarted","Data":"70b7b5f35bbcd483bba86ff7b332d405cd3b375872c4877fff48f95794cdcfe5"}
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.178174 4957 scope.go:117] "RemoveContainer" containerID="8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.178524 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7ccdb8cd88-jgq8q_openstack(874e2658-401a-467f-bce4-5ad01f6c393c)\"" pod="openstack/heat-api-7ccdb8cd88-jgq8q" podUID="874e2658-401a-467f-bce4-5ad01f6c393c"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.187721 4957 scope.go:117] "RemoveContainer" containerID="677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.191483 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.191614 4957 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.191865 4957 scope.go:117] "RemoveContainer" containerID="27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.192129 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4f676d-5jrq4_openstack(90b02279-5906-420f-9def-af822c4a6ff3)\"" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" podUID="90b02279-5906-420f-9def-af822c4a6ff3"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.203281 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.206701 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.207825 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.208027 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prlfm\" (UniqueName: \"kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.208155 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.208326 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.208438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.208568 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd\") pod \"9ed70786-2196-4a33-9b79-b9cc16ee171a\" (UID: \"9ed70786-2196-4a33-9b79-b9cc16ee171a\") "
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.213593 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.215252 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.223758 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts" (OuterVolumeSpecName: "scripts") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.233718 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm" (OuterVolumeSpecName: "kube-api-access-prlfm") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "kube-api-access-prlfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.311499 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.311533 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prlfm\" (UniqueName: \"kubernetes.io/projected/9ed70786-2196-4a33-9b79-b9cc16ee171a-kube-api-access-prlfm\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.311543 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.311551 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ed70786-2196-4a33-9b79-b9cc16ee171a-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.407567 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.415861 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.669366 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.730603 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.773348 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data" (OuterVolumeSpecName: "config-data") pod "9ed70786-2196-4a33-9b79-b9cc16ee171a" (UID: "9ed70786-2196-4a33-9b79-b9cc16ee171a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.778545 4957 scope.go:117] "RemoveContainer" containerID="5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.826329 4957 scope.go:117] "RemoveContainer" containerID="5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.835992 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed70786-2196-4a33-9b79-b9cc16ee171a-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.866464 4957 scope.go:117] "RemoveContainer" containerID="a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.880613 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804\": container with ID starting with a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804 not found: ID does not exist" containerID="a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.880690 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804"} err="failed to get container status \"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804\": rpc error: code = NotFound desc = could not find container \"a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804\": container with ID starting with a910d52fe67a521fa74103a421c03c02118f46e89cfdc552d390c5d6eb2a6804 not found: ID does not exist"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.880715 4957 scope.go:117] "RemoveContainer" containerID="677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.881675 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5\": container with ID starting with 677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5 not found: ID does not exist" containerID="677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.881695 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5"} err="failed to get container status \"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5\": rpc error: code = NotFound desc = could not find container \"677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5\": container with ID starting with 677a88a62e5c47d40337ee676a3907933f47e9722380acf4866f414551efe6d5 not found: ID does not exist"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.881708 4957 scope.go:117] "RemoveContainer" containerID="5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.882896 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e\": container with ID starting with 5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e not found: ID does not exist" containerID="5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.882917 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e"} err="failed to get container status \"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e\": rpc error: code = NotFound desc = could not find container \"5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e\": container with ID starting with 5d8304d4165c423ff93a71d2b7544ef12a39ce4ff43d5cfb65ecb3369673893e not found: ID does not exist"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.882934 4957 scope.go:117] "RemoveContainer" containerID="5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"
Nov 28 21:12:31 crc kubenswrapper[4957]: E1128 21:12:31.883502 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5\": container with ID starting with 5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5 not found: ID does not exist" containerID="5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"
Nov 28 21:12:31 crc kubenswrapper[4957]: I1128 21:12:31.883529 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5"} err="failed to get container status \"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5\": rpc error: code = NotFound desc = could not find container \"5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5\": container with ID starting with 5a5938f144bdb32839c864ec50e9a43f4f280467f70650ecfdf11855664f01a5 not found: ID does not exist"
Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.088169 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.104890 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.161283 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:12:32 crc kubenswrapper[4957]: E1128 21:12:32.161785 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-notification-agent"
CPUSet assignment" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-notification-agent" Nov 28 21:12:32 crc kubenswrapper[4957]: E1128 21:12:32.161817 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-central-agent" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.161823 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-central-agent" Nov 28 21:12:32 crc kubenswrapper[4957]: E1128 21:12:32.161838 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="proxy-httpd" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.161846 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="proxy-httpd" Nov 28 21:12:32 crc kubenswrapper[4957]: E1128 21:12:32.161868 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="sg-core" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.161873 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="sg-core" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.162141 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-notification-agent" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.162163 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="proxy-httpd" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.162185 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="sg-core" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.162198 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" containerName="ceilometer-central-agent" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.164405 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.175840 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.176675 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.205235 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.230892 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f4944777d-4svqx" event={"ID":"b7b5b68f-4c6d-4002-a784-5f8e85470f5f","Type":"ContainerStarted","Data":"74ced27fc87051adc01aec7d44561800c7b4a5356f33699a5af97194890c82eb"} Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.232223 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f4944777d-4svqx" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.237188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8445cd679c-6kwss" event={"ID":"bb0b8e5f-611a-452d-9f0e-229f445c77d6","Type":"ContainerStarted","Data":"9e5543fca250dfea7b3fb960cb4ae737ef83d43ccf1b3b1e998e737df1a1d148"} Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.237243 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8445cd679c-6kwss" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.237536 4957 scope.go:117] "RemoveContainer" containerID="8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d" Nov 28 21:12:32 crc kubenswrapper[4957]: E1128 21:12:32.237714 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7ccdb8cd88-jgq8q_openstack(874e2658-401a-467f-bce4-5ad01f6c393c)\"" pod="openstack/heat-api-7ccdb8cd88-jgq8q" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.288010 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5f4944777d-4svqx" podStartSLOduration=3.287988139 podStartE2EDuration="3.287988139s" podCreationTimestamp="2025-11-28 21:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:32.247611236 +0000 UTC m=+1391.716259145" watchObservedRunningTime="2025-11-28 21:12:32.287988139 +0000 UTC m=+1391.756636048" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.294245 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-8445cd679c-6kwss" podStartSLOduration=3.294226642 podStartE2EDuration="3.294226642s" podCreationTimestamp="2025-11-28 21:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:32.266722266 +0000 UTC m=+1391.735370175" watchObservedRunningTime="2025-11-28 21:12:32.294226642 +0000 UTC m=+1391.762874551" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.352818 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.353093 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.353267 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4nz\" (UniqueName: \"kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.354396 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.354445 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.354557 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.355096 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.458116 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4nz\" (UniqueName: \"kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.458325 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.458937 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.458988 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.459040 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.459367 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.459459 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.459873 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.459971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.471784 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.474916 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.478413 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.496900 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.497240 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4nz\" (UniqueName: 
\"kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz\") pod \"ceilometer-0\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.563773 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:32 crc kubenswrapper[4957]: I1128 21:12:32.943184 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed70786-2196-4a33-9b79-b9cc16ee171a" path="/var/lib/kubelet/pods/9ed70786-2196-4a33-9b79-b9cc16ee171a/volumes" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.061317 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6795cdbb7b-2qqp4" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:42680->10.217.0.216:8004: read: connection reset by peer" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.062364 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6795cdbb7b-2qqp4" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": dial tcp 10.217.0.216:8004: connect: connection refused" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.281639 4957 generic.go:334] "Generic (PLEG): container finished" podID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerID="9a623a2700407a2446b35c63b325e9b6de3d083a8deceec0fb21e0f32c1f339f" exitCode=0 Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.281713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6795cdbb7b-2qqp4" event={"ID":"63f3217b-0b9a-4621-abe7-6e1e90b01f35","Type":"ContainerDied","Data":"9a623a2700407a2446b35c63b325e9b6de3d083a8deceec0fb21e0f32c1f339f"} Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.440710 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:33 crc kubenswrapper[4957]: W1128 21:12:33.485424 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb332158_846f_4259_8bbb_fa93e3271e65.slice/crio-f7c9641f4cddf1fd5d5f9eb6fbfc006428a2f0a11fe1bbf338c41f2a71e6f1ea WatchSource:0}: Error finding container f7c9641f4cddf1fd5d5f9eb6fbfc006428a2f0a11fe1bbf338c41f2a71e6f1ea: Status 404 returned error can't find the container with id f7c9641f4cddf1fd5d5f9eb6fbfc006428a2f0a11fe1bbf338c41f2a71e6f1ea Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.702891 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.720633 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnbv6\" (UniqueName: \"kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6\") pod \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.720719 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data\") pod \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.720958 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle\") pod \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.721076 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom\") pod \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\" (UID: \"63f3217b-0b9a-4621-abe7-6e1e90b01f35\") " Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.740662 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "63f3217b-0b9a-4621-abe7-6e1e90b01f35" (UID: "63f3217b-0b9a-4621-abe7-6e1e90b01f35"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.754632 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6" (OuterVolumeSpecName: "kube-api-access-jnbv6") pod "63f3217b-0b9a-4621-abe7-6e1e90b01f35" (UID: "63f3217b-0b9a-4621-abe7-6e1e90b01f35"). InnerVolumeSpecName "kube-api-access-jnbv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.770109 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63f3217b-0b9a-4621-abe7-6e1e90b01f35" (UID: "63f3217b-0b9a-4621-abe7-6e1e90b01f35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.826756 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.827065 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnbv6\" (UniqueName: \"kubernetes.io/projected/63f3217b-0b9a-4621-abe7-6e1e90b01f35-kube-api-access-jnbv6\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.827078 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.854294 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data" (OuterVolumeSpecName: "config-data") pod "63f3217b-0b9a-4621-abe7-6e1e90b01f35" (UID: "63f3217b-0b9a-4621-abe7-6e1e90b01f35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:33 crc kubenswrapper[4957]: I1128 21:12:33.929160 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f3217b-0b9a-4621-abe7-6e1e90b01f35-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.297127 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6795cdbb7b-2qqp4" Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.297123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6795cdbb7b-2qqp4" event={"ID":"63f3217b-0b9a-4621-abe7-6e1e90b01f35","Type":"ContainerDied","Data":"d6ee45070bad37e507b6444641fe00eb83d17ed295449eb0543d110d681ed895"} Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.297295 4957 scope.go:117] "RemoveContainer" containerID="9a623a2700407a2446b35c63b325e9b6de3d083a8deceec0fb21e0f32c1f339f" Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.303197 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerStarted","Data":"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa"} Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.303256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerStarted","Data":"f7c9641f4cddf1fd5d5f9eb6fbfc006428a2f0a11fe1bbf338c41f2a71e6f1ea"} Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.341905 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"] Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.364990 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6795cdbb7b-2qqp4"] Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.551019 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": read tcp 10.217.0.2:40718->10.217.0.215:8000: read: connection reset by peer" 
Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.551906 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": dial tcp 10.217.0.215:8000: connect: connection refused" Nov 28 21:12:34 crc kubenswrapper[4957]: I1128 21:12:34.827079 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" path="/var/lib/kubelet/pods/63f3217b-0b9a-4621-abe7-6e1e90b01f35/volumes" Nov 28 21:12:35 crc kubenswrapper[4957]: I1128 21:12:35.315422 4957 generic.go:334] "Generic (PLEG): container finished" podID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerID="8539f40e8ead99acead4c3bf680e093bccb1cc08dffdfb2239cfb3bc94550e1c" exitCode=0 Nov 28 21:12:35 crc kubenswrapper[4957]: I1128 21:12:35.315471 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" event={"ID":"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4","Type":"ContainerDied","Data":"8539f40e8ead99acead4c3bf680e093bccb1cc08dffdfb2239cfb3bc94550e1c"} Nov 28 21:12:36 crc kubenswrapper[4957]: I1128 21:12:36.135888 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:36 crc kubenswrapper[4957]: I1128 21:12:36.883832 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:12:37 crc kubenswrapper[4957]: I1128 21:12:37.381183 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": dial tcp 10.217.0.215:8000: connect: connection refused" Nov 28 21:12:40 crc kubenswrapper[4957]: I1128 21:12:40.887719 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.030457 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data\") pod \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.030891 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle\") pod \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.031490 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom\") pod \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.031740 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blrsc\" (UniqueName: \"kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc\") pod \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\" (UID: \"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4\") " Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.040358 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" (UID: "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.043495 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc" (OuterVolumeSpecName: "kube-api-access-blrsc") pod "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" (UID: "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4"). InnerVolumeSpecName "kube-api-access-blrsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.072286 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" (UID: "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.106333 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data" (OuterVolumeSpecName: "config-data") pod "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" (UID: "5dcb7aec-fdd4-4acb-9e80-64086bfe64c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.136070 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.136112 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.136121 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blrsc\" (UniqueName: \"kubernetes.io/projected/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-kube-api-access-blrsc\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.136131 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.393054 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" event={"ID":"5dcb7aec-fdd4-4acb-9e80-64086bfe64c4","Type":"ContainerDied","Data":"29189d101aa68538916a398331854bbd707d85b2ba37da2be81c905da41c3a79"} Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.393147 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6c99bbb6f7-ntvsj" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.393370 4957 scope.go:117] "RemoveContainer" containerID="8539f40e8ead99acead4c3bf680e093bccb1cc08dffdfb2239cfb3bc94550e1c" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.402449 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerStarted","Data":"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957"} Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.411541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" event={"ID":"efb8e30f-337f-4de0-8508-486479b41e97","Type":"ContainerStarted","Data":"2d3bdeafa902ea5b5dda20833a9bc5656494f5f15558017bdf63d1f6c82b78ef"} Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.439901 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" podStartSLOduration=2.155235053 podStartE2EDuration="18.439876255s" podCreationTimestamp="2025-11-28 21:12:23 +0000 UTC" firstStartedPulling="2025-11-28 21:12:24.243732179 +0000 UTC m=+1383.712380088" lastFinishedPulling="2025-11-28 21:12:40.528373381 +0000 UTC m=+1399.997021290" observedRunningTime="2025-11-28 21:12:41.425428119 +0000 UTC m=+1400.894076028" watchObservedRunningTime="2025-11-28 21:12:41.439876255 +0000 UTC m=+1400.908524164" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.533261 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"] Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.556434 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6c99bbb6f7-ntvsj"] Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.592601 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-8445cd679c-6kwss" Nov 28 21:12:41 crc kubenswrapper[4957]: I1128 21:12:41.696060 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.060478 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5f4944777d-4svqx" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.176916 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.701651 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.798691 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom\") pod \"874e2658-401a-467f-bce4-5ad01f6c393c\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.798745 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data\") pod \"874e2658-401a-467f-bce4-5ad01f6c393c\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.798782 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km25f\" (UniqueName: \"kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f\") pod \"874e2658-401a-467f-bce4-5ad01f6c393c\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.798922 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle\") pod \"874e2658-401a-467f-bce4-5ad01f6c393c\" (UID: \"874e2658-401a-467f-bce4-5ad01f6c393c\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.804368 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "874e2658-401a-467f-bce4-5ad01f6c393c" (UID: "874e2658-401a-467f-bce4-5ad01f6c393c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.805411 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f" (OuterVolumeSpecName: "kube-api-access-km25f") pod "874e2658-401a-467f-bce4-5ad01f6c393c" (UID: "874e2658-401a-467f-bce4-5ad01f6c393c"). InnerVolumeSpecName "kube-api-access-km25f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.828018 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" path="/var/lib/kubelet/pods/5dcb7aec-fdd4-4acb-9e80-64086bfe64c4/volumes" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.841365 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "874e2658-401a-467f-bce4-5ad01f6c393c" (UID: "874e2658-401a-467f-bce4-5ad01f6c393c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.863298 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data" (OuterVolumeSpecName: "config-data") pod "874e2658-401a-467f-bce4-5ad01f6c393c" (UID: "874e2658-401a-467f-bce4-5ad01f6c393c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.894701 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.902266 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data\") pod \"90b02279-5906-420f-9def-af822c4a6ff3\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.902527 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle\") pod \"90b02279-5906-420f-9def-af822c4a6ff3\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.902672 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggwc4\" (UniqueName: \"kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4\") pod \"90b02279-5906-420f-9def-af822c4a6ff3\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.902706 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom\") pod \"90b02279-5906-420f-9def-af822c4a6ff3\" (UID: \"90b02279-5906-420f-9def-af822c4a6ff3\") " Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.903238 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.903637 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.903658 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km25f\" (UniqueName: 
\"kubernetes.io/projected/874e2658-401a-467f-bce4-5ad01f6c393c-kube-api-access-km25f\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.903670 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874e2658-401a-467f-bce4-5ad01f6c393c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.908965 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4" (OuterVolumeSpecName: "kube-api-access-ggwc4") pod "90b02279-5906-420f-9def-af822c4a6ff3" (UID: "90b02279-5906-420f-9def-af822c4a6ff3"). InnerVolumeSpecName "kube-api-access-ggwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.920521 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90b02279-5906-420f-9def-af822c4a6ff3" (UID: "90b02279-5906-420f-9def-af822c4a6ff3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.934669 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90b02279-5906-420f-9def-af822c4a6ff3" (UID: "90b02279-5906-420f-9def-af822c4a6ff3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:42 crc kubenswrapper[4957]: I1128 21:12:42.965472 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data" (OuterVolumeSpecName: "config-data") pod "90b02279-5906-420f-9def-af822c4a6ff3" (UID: "90b02279-5906-420f-9def-af822c4a6ff3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.005109 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggwc4\" (UniqueName: \"kubernetes.io/projected/90b02279-5906-420f-9def-af822c4a6ff3-kube-api-access-ggwc4\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.005145 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.005155 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.005163 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b02279-5906-420f-9def-af822c4a6ff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.473394 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerStarted","Data":"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32"} Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.474713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" event={"ID":"90b02279-5906-420f-9def-af822c4a6ff3","Type":"ContainerDied","Data":"a2116ae3f37eaeada0f7e9a28e25c33260e3af8ac9faf6b682b05028a72377ed"} Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.474750 4957 scope.go:117] "RemoveContainer" containerID="27be004b20a45e1a4b977cdb69be6fa1f2b85a75925c9c8aef7a3d9e47165226" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.474815 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4f676d-5jrq4" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.480315 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ccdb8cd88-jgq8q" event={"ID":"874e2658-401a-467f-bce4-5ad01f6c393c","Type":"ContainerDied","Data":"02e6edffb92b3393dfb4a97c799b9b5c8d74718fec7d7f5d80dbdf103f4cb2d8"} Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.480368 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7ccdb8cd88-jgq8q" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.511538 4957 scope.go:117] "RemoveContainer" containerID="8cbaebf7acec6f66c554a987ddab899620dd0aacd7c9d83ea39df45fd53bb69d" Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.514863 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.530090 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-78dd4f676d-5jrq4"] Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.548856 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:43 crc kubenswrapper[4957]: I1128 21:12:43.557818 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7ccdb8cd88-jgq8q"] Nov 28 21:12:44 crc kubenswrapper[4957]: I1128 21:12:44.832734 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" path="/var/lib/kubelet/pods/874e2658-401a-467f-bce4-5ad01f6c393c/volumes" Nov 28 21:12:44 crc kubenswrapper[4957]: I1128 21:12:44.833779 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b02279-5906-420f-9def-af822c4a6ff3" path="/var/lib/kubelet/pods/90b02279-5906-420f-9def-af822c4a6ff3/volumes" Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.520507 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerStarted","Data":"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3"} Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.520673 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-central-agent" containerID="cri-o://69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa" gracePeriod=30 Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.520934 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="proxy-httpd" containerID="cri-o://922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3" gracePeriod=30 Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.521005 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-notification-agent" containerID="cri-o://71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957" gracePeriod=30 Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.521037 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="sg-core" containerID="cri-o://ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32" gracePeriod=30 Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.521084 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:12:45 crc kubenswrapper[4957]: I1128 21:12:45.557328 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.264509611 podStartE2EDuration="13.557310452s" podCreationTimestamp="2025-11-28 21:12:32 +0000 UTC" 
firstStartedPulling="2025-11-28 21:12:33.488605372 +0000 UTC m=+1392.957253281" lastFinishedPulling="2025-11-28 21:12:44.781406213 +0000 UTC m=+1404.250054122" observedRunningTime="2025-11-28 21:12:45.55071899 +0000 UTC m=+1405.019366899" watchObservedRunningTime="2025-11-28 21:12:45.557310452 +0000 UTC m=+1405.025958361" Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.011048 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.071062 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.071304 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-9c7ff8b44-mp2wj" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" containerID="cri-o://6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" gracePeriod=60 Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.537571 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb332158-846f-4259-8bbb-fa93e3271e65" containerID="922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3" exitCode=0 Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.537636 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerDied","Data":"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3"} Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.538078 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb332158-846f-4259-8bbb-fa93e3271e65" containerID="ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32" exitCode=2 Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.538138 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb332158-846f-4259-8bbb-fa93e3271e65" containerID="71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957" exitCode=0 Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.538115 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerDied","Data":"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32"} Nov 28 21:12:46 crc kubenswrapper[4957]: I1128 21:12:46.538169 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerDied","Data":"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957"} Nov 28 21:12:46 crc kubenswrapper[4957]: E1128 21:12:46.844437 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:46 crc kubenswrapper[4957]: E1128 21:12:46.846013 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:46 crc kubenswrapper[4957]: E1128 21:12:46.849562 4957 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:46 crc kubenswrapper[4957]: E1128 21:12:46.849604 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-9c7ff8b44-mp2wj" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.295715 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428456 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428531 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428672 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428769 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428828 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.428883 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb4nz\" (UniqueName: \"kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.429006 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml\") pod \"bb332158-846f-4259-8bbb-fa93e3271e65\" (UID: \"bb332158-846f-4259-8bbb-fa93e3271e65\") " Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.429571 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.430725 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.431185 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.450059 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz" (OuterVolumeSpecName: "kube-api-access-wb4nz") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "kube-api-access-wb4nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.454204 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts" (OuterVolumeSpecName: "scripts") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.470789 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.532729 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb332158-846f-4259-8bbb-fa93e3271e65-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.532763 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.532773 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb4nz\" (UniqueName: \"kubernetes.io/projected/bb332158-846f-4259-8bbb-fa93e3271e65-kube-api-access-wb4nz\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.532784 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.539017 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.561900 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb332158-846f-4259-8bbb-fa93e3271e65" containerID="69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa" exitCode=0 Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.561942 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerDied","Data":"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa"} Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.561964 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.561969 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb332158-846f-4259-8bbb-fa93e3271e65","Type":"ContainerDied","Data":"f7c9641f4cddf1fd5d5f9eb6fbfc006428a2f0a11fe1bbf338c41f2a71e6f1ea"} Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.561981 4957 scope.go:117] "RemoveContainer" containerID="922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.567983 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data" (OuterVolumeSpecName: "config-data") pod "bb332158-846f-4259-8bbb-fa93e3271e65" (UID: "bb332158-846f-4259-8bbb-fa93e3271e65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.587978 4957 scope.go:117] "RemoveContainer" containerID="ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.607255 4957 scope.go:117] "RemoveContainer" containerID="71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.630673 4957 scope.go:117] "RemoveContainer" containerID="69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.634897 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.634949 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb332158-846f-4259-8bbb-fa93e3271e65-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.655700 4957 scope.go:117] "RemoveContainer" containerID="922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.656612 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3\": container with ID starting with 922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3 not found: ID does not exist" containerID="922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.656660 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3"} err="failed to get container status \"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3\": rpc error: code = NotFound desc = could not find container \"922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3\": container with ID starting with 922011e912a98b58d3c171f598bdf11a2413c82146a0c1eb57c70495a72c30b3 not found: ID does not exist" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.656698 4957 scope.go:117] "RemoveContainer" containerID="ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.657244 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32\": container with ID starting with ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32 not found: ID does not exist" containerID="ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.657316 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32"} err="failed to get container status \"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32\": rpc error: code = NotFound desc = could not find container \"ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32\": container with ID starting with 
ebb80ebf3f57ae7978bf81ffd46cecd5e78b795d4eeb49ed9ac8a1de24de2e32 not found: ID does not exist" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.657352 4957 scope.go:117] "RemoveContainer" containerID="71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.657735 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957\": container with ID starting with 71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957 not found: ID does not exist" containerID="71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.657779 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957"} err="failed to get container status \"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957\": rpc error: code = NotFound desc = could not find container \"71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957\": container with ID starting with 71103a28c7794ca9f6a17e629e4d901be5c17e8ae9d7f624d410c33b67d89957 not found: ID does not exist" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.657809 4957 scope.go:117] "RemoveContainer" containerID="69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.658087 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa\": container with ID starting with 69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa not found: ID does not exist" containerID="69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.658120 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa"} err="failed to get container status \"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa\": rpc error: code = NotFound desc = could not find container \"69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa\": container with ID starting with 69345f3b94683b519467ff24ca3a7900469ca3f6eefdff11c81368e4bf72a5aa not found: ID does not exist" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.887126 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.897376 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.912515 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913001 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913018 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913035 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="sg-core" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913042 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="sg-core" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913058 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913065 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913075 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913081 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913089 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913095 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913115 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913121 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913135 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913140 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913153 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-notification-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913159 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-notification-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913172 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="proxy-httpd" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913177 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="proxy-httpd" Nov 28 21:12:48 crc kubenswrapper[4957]: E1128 21:12:48.913194 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-central-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913199 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-central-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913414 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913432 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f3217b-0b9a-4621-abe7-6e1e90b01f35" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913441 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="proxy-httpd" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913452 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcb7aec-fdd4-4acb-9e80-64086bfe64c4" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913466 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913478 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="sg-core" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913489 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="874e2658-401a-467f-bce4-5ad01f6c393c" containerName="heat-api" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913500 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b02279-5906-420f-9def-af822c4a6ff3" containerName="heat-cfnapi" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913509 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-notification-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.913524 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" containerName="ceilometer-central-agent" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.916303 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.923616 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.923838 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:12:48 crc kubenswrapper[4957]: I1128 21:12:48.934432 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043529 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043629 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043658 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043729 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfrx\" (UniqueName: \"kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043870 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.043908 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.101322 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:49 crc kubenswrapper[4957]: E1128 21:12:49.102276 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-9gfrx log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.145742 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.145853 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.145878 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.145912 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gfrx\" (UniqueName: \"kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.146005 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.146030 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.146082 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.146496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.146537 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.156027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.156069 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.156562 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.157065 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.161713 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gfrx\" (UniqueName: \"kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx\") pod \"ceilometer-0\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.573405 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.587224 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.758708 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.758820 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.759001 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.759162 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.759820 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.759851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.759913 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.760066 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gfrx\" (UniqueName: \"kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx\") pod \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\" (UID: \"728f0ec2-aca5-43fd-b602-c0aa26c6b9ce\") " Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.760141 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.760972 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.760997 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.764327 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts" (OuterVolumeSpecName: "scripts") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.764367 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data" (OuterVolumeSpecName: "config-data") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.764417 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx" (OuterVolumeSpecName: "kube-api-access-9gfrx") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "kube-api-access-9gfrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.765446 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.775758 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" (UID: "728f0ec2-aca5-43fd-b602-c0aa26c6b9ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.862637 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gfrx\" (UniqueName: \"kubernetes.io/projected/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-kube-api-access-9gfrx\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.862666 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.862677 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.862687 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:49 crc kubenswrapper[4957]: I1128 21:12:49.862695 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.603791 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.692251 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.712274 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.725479 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.728002 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.741592 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.788421 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.790800 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.829566 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728f0ec2-aca5-43fd-b602-c0aa26c6b9ce" path="/var/lib/kubelet/pods/728f0ec2-aca5-43fd-b602-c0aa26c6b9ce/volumes" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.829995 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb332158-846f-4259-8bbb-fa93e3271e65" path="/var/lib/kubelet/pods/bb332158-846f-4259-8bbb-fa93e3271e65/volumes" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.898834 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.898919 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.898962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.899055 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.899252 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.899328 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:50 crc kubenswrapper[4957]: I1128 21:12:50.899467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lr42\" (UniqueName: \"kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42\") pod 
\"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001288 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001399 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lr42\" (UniqueName: \"kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001468 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001516 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001618 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.001671 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.002910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.003173 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.007947 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " 
pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.008648 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.008945 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.025122 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lr42\" (UniqueName: \"kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.025837 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") " pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.054319 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:12:51 crc kubenswrapper[4957]: I1128 21:12:51.623965 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:51 crc kubenswrapper[4957]: W1128 21:12:51.624783 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6eed030_8767_47ca_a5cb_abd34d8e3b71.slice/crio-96d92477cb626f612ecef28d846b3e3bf67b057a64437ead1a6ef967411a277a WatchSource:0}: Error finding container 96d92477cb626f612ecef28d846b3e3bf67b057a64437ead1a6ef967411a277a: Status 404 returned error can't find the container with id 96d92477cb626f612ecef28d846b3e3bf67b057a64437ead1a6ef967411a277a Nov 28 21:12:52 crc kubenswrapper[4957]: I1128 21:12:52.643062 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerStarted","Data":"4df6663d07345b6be4daad7e5b61ee448bd1f81594ab7ab60a1a34ec28a445c2"} Nov 28 21:12:52 crc kubenswrapper[4957]: I1128 21:12:52.643576 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerStarted","Data":"96d92477cb626f612ecef28d846b3e3bf67b057a64437ead1a6ef967411a277a"} Nov 28 21:12:53 crc kubenswrapper[4957]: I1128 21:12:53.655616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerStarted","Data":"dbbc173d8b3cd946b867bb5224ebd9fcfc29e6749db8fa51d13aaa81f2261c6c"} Nov 28 21:12:53 crc kubenswrapper[4957]: I1128 21:12:53.659583 4957 generic.go:334] "Generic (PLEG): container finished" podID="efb8e30f-337f-4de0-8508-486479b41e97" containerID="2d3bdeafa902ea5b5dda20833a9bc5656494f5f15558017bdf63d1f6c82b78ef" exitCode=0 Nov 28 21:12:53 crc kubenswrapper[4957]: I1128 21:12:53.659825 4957 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" event={"ID":"efb8e30f-337f-4de0-8508-486479b41e97","Type":"ContainerDied","Data":"2d3bdeafa902ea5b5dda20833a9bc5656494f5f15558017bdf63d1f6c82b78ef"} Nov 28 21:12:54 crc kubenswrapper[4957]: I1128 21:12:54.672777 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerStarted","Data":"a68bbeba19d7bb380cb8a4fbdbb68b7dfacef3f42834b1c90b103934215df925"} Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.157323 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.300376 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data\") pod \"efb8e30f-337f-4de0-8508-486479b41e97\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.300477 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts\") pod \"efb8e30f-337f-4de0-8508-486479b41e97\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.300585 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle\") pod \"efb8e30f-337f-4de0-8508-486479b41e97\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.300735 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmw8v\" (UniqueName: \"kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v\") pod \"efb8e30f-337f-4de0-8508-486479b41e97\" (UID: \"efb8e30f-337f-4de0-8508-486479b41e97\") " Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.306162 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v" (OuterVolumeSpecName: "kube-api-access-vmw8v") pod "efb8e30f-337f-4de0-8508-486479b41e97" (UID: "efb8e30f-337f-4de0-8508-486479b41e97"). InnerVolumeSpecName "kube-api-access-vmw8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.331349 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts" (OuterVolumeSpecName: "scripts") pod "efb8e30f-337f-4de0-8508-486479b41e97" (UID: "efb8e30f-337f-4de0-8508-486479b41e97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.336355 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data" (OuterVolumeSpecName: "config-data") pod "efb8e30f-337f-4de0-8508-486479b41e97" (UID: "efb8e30f-337f-4de0-8508-486479b41e97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.341225 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efb8e30f-337f-4de0-8508-486479b41e97" (UID: "efb8e30f-337f-4de0-8508-486479b41e97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.403908 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.403942 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.403955 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb8e30f-337f-4de0-8508-486479b41e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.403968 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmw8v\" (UniqueName: \"kubernetes.io/projected/efb8e30f-337f-4de0-8508-486479b41e97-kube-api-access-vmw8v\") on node \"crc\" DevicePath \"\"" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.686929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerStarted","Data":"dc83f88dba2540240f257c3753ece7d5d3f3c2f43028ef2d93bd467f24b3c339"} Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.687929 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.689076 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" event={"ID":"efb8e30f-337f-4de0-8508-486479b41e97","Type":"ContainerDied","Data":"fca2f043191946cc477ff02790e61e601f73ac6a880f4e54fe86d92a1ed4d5f4"} Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.689151 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca2f043191946cc477ff02790e61e601f73ac6a880f4e54fe86d92a1ed4d5f4" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.689179 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dnz8n" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.723476 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.391265311 podStartE2EDuration="5.72345673s" podCreationTimestamp="2025-11-28 21:12:50 +0000 UTC" firstStartedPulling="2025-11-28 21:12:51.636232164 +0000 UTC m=+1411.104880073" lastFinishedPulling="2025-11-28 21:12:54.968423583 +0000 UTC m=+1414.437071492" observedRunningTime="2025-11-28 21:12:55.711362762 +0000 UTC m=+1415.180010671" watchObservedRunningTime="2025-11-28 21:12:55.72345673 +0000 UTC m=+1415.192104639" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.795217 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 21:12:55 crc kubenswrapper[4957]: E1128 21:12:55.795671 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb8e30f-337f-4de0-8508-486479b41e97" containerName="nova-cell0-conductor-db-sync" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.795686 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb8e30f-337f-4de0-8508-486479b41e97" containerName="nova-cell0-conductor-db-sync" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.795905 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb8e30f-337f-4de0-8508-486479b41e97" containerName="nova-cell0-conductor-db-sync" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.796648 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.807499 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rlz2c" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.808227 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.845019 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.914053 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9mz5\" (UniqueName: \"kubernetes.io/projected/22f8d9b1-ab89-42ad-8872-320873c45110-kube-api-access-r9mz5\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.915322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:55 crc kubenswrapper[4957]: I1128 21:12:55.915529 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.018001 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9mz5\" (UniqueName: 
\"kubernetes.io/projected/22f8d9b1-ab89-42ad-8872-320873c45110-kube-api-access-r9mz5\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.018172 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.018191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.021953 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.035861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22f8d9b1-ab89-42ad-8872-320873c45110-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.048728 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9mz5\" (UniqueName: \"kubernetes.io/projected/22f8d9b1-ab89-42ad-8872-320873c45110-kube-api-access-r9mz5\") pod \"nova-cell0-conductor-0\" (UID: \"22f8d9b1-ab89-42ad-8872-320873c45110\") " pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.115716 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:56 crc kubenswrapper[4957]: I1128 21:12:56.708956 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 21:12:56 crc kubenswrapper[4957]: E1128 21:12:56.846065 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:56 crc kubenswrapper[4957]: E1128 21:12:56.847234 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:56 crc kubenswrapper[4957]: E1128 21:12:56.852367 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:12:56 crc kubenswrapper[4957]: E1128 21:12:56.852414 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-9c7ff8b44-mp2wj" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" Nov 28 21:12:57 crc kubenswrapper[4957]: I1128 21:12:57.713658 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22f8d9b1-ab89-42ad-8872-320873c45110","Type":"ContainerStarted","Data":"107b997389c5f51f10d807caf0eb80c6e42196917f8f19c5f451a9e1d1d0df8f"} Nov 28 21:12:57 crc kubenswrapper[4957]: I1128 21:12:57.714848 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22f8d9b1-ab89-42ad-8872-320873c45110","Type":"ContainerStarted","Data":"7a175d41c5deb74b8c7b3ebce89a65480234469756f09b7b0f576c09d4670a57"} Nov 28 21:12:57 crc kubenswrapper[4957]: I1128 21:12:57.716370 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.146916 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.1468956519999995 podStartE2EDuration="4.146895652s" podCreationTimestamp="2025-11-28 21:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:12:57.742643591 +0000 UTC m=+1417.211291500" watchObservedRunningTime="2025-11-28 21:12:59.146895652 +0000 UTC m=+1418.615543561" Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.148903 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.149123 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-central-agent" 
containerID="cri-o://4df6663d07345b6be4daad7e5b61ee448bd1f81594ab7ab60a1a34ec28a445c2" gracePeriod=30 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.150319 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="proxy-httpd" containerID="cri-o://dc83f88dba2540240f257c3753ece7d5d3f3c2f43028ef2d93bd467f24b3c339" gracePeriod=30 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.150418 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-notification-agent" containerID="cri-o://dbbc173d8b3cd946b867bb5224ebd9fcfc29e6749db8fa51d13aaa81f2261c6c" gracePeriod=30 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.150459 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="sg-core" containerID="cri-o://a68bbeba19d7bb380cb8a4fbdbb68b7dfacef3f42834b1c90b103934215df925" gracePeriod=30 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.741233 4957 generic.go:334] "Generic (PLEG): container finished" podID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" exitCode=0 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.741246 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c7ff8b44-mp2wj" event={"ID":"2b5d38cb-30e7-40d2-9c78-a882bd723332","Type":"ContainerDied","Data":"6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2"} Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.746048 4957 generic.go:334] "Generic (PLEG): container finished" podID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerID="dc83f88dba2540240f257c3753ece7d5d3f3c2f43028ef2d93bd467f24b3c339" exitCode=0 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.746082 4957 generic.go:334] "Generic (PLEG): container finished" podID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerID="a68bbeba19d7bb380cb8a4fbdbb68b7dfacef3f42834b1c90b103934215df925" exitCode=2 Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.746122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerDied","Data":"dc83f88dba2540240f257c3753ece7d5d3f3c2f43028ef2d93bd467f24b3c339"} Nov 28 21:12:59 crc kubenswrapper[4957]: I1128 21:12:59.746155 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerDied","Data":"a68bbeba19d7bb380cb8a4fbdbb68b7dfacef3f42834b1c90b103934215df925"} Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.076503 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.233328 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle\") pod \"2b5d38cb-30e7-40d2-9c78-a882bd723332\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.233382 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data\") pod \"2b5d38cb-30e7-40d2-9c78-a882bd723332\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.233455 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz4nz\" (UniqueName: \"kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz\") pod \"2b5d38cb-30e7-40d2-9c78-a882bd723332\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.233580 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom\") pod \"2b5d38cb-30e7-40d2-9c78-a882bd723332\" (UID: \"2b5d38cb-30e7-40d2-9c78-a882bd723332\") " Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.239543 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz" (OuterVolumeSpecName: "kube-api-access-rz4nz") pod "2b5d38cb-30e7-40d2-9c78-a882bd723332" (UID: "2b5d38cb-30e7-40d2-9c78-a882bd723332"). InnerVolumeSpecName "kube-api-access-rz4nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.242006 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b5d38cb-30e7-40d2-9c78-a882bd723332" (UID: "2b5d38cb-30e7-40d2-9c78-a882bd723332"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.286408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b5d38cb-30e7-40d2-9c78-a882bd723332" (UID: "2b5d38cb-30e7-40d2-9c78-a882bd723332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.291822 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data" (OuterVolumeSpecName: "config-data") pod "2b5d38cb-30e7-40d2-9c78-a882bd723332" (UID: "2b5d38cb-30e7-40d2-9c78-a882bd723332"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.335937 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.335982 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.335996 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz4nz\" (UniqueName: \"kubernetes.io/projected/2b5d38cb-30e7-40d2-9c78-a882bd723332-kube-api-access-rz4nz\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.336009 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5d38cb-30e7-40d2-9c78-a882bd723332-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.758408 4957 generic.go:334] "Generic (PLEG): container finished" podID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerID="dbbc173d8b3cd946b867bb5224ebd9fcfc29e6749db8fa51d13aaa81f2261c6c" exitCode=0 Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.758470 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerDied","Data":"dbbc173d8b3cd946b867bb5224ebd9fcfc29e6749db8fa51d13aaa81f2261c6c"} Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.764042 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c7ff8b44-mp2wj" event={"ID":"2b5d38cb-30e7-40d2-9c78-a882bd723332","Type":"ContainerDied","Data":"91eb025c33b07833c266a50bfa6bcdbd322a0c9ff467dcd400e805d565983ca1"} Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.764120 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9c7ff8b44-mp2wj" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.764377 4957 scope.go:117] "RemoveContainer" containerID="6d298dc5d42b0f5a3df27ceaa4d3fa2ebb95446c9f59c3d0fd5ed047f18f2fa2" Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.797095 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.811618 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-9c7ff8b44-mp2wj"] Nov 28 21:13:00 crc kubenswrapper[4957]: I1128 21:13:00.835156 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" path="/var/lib/kubelet/pods/2b5d38cb-30e7-40d2-9c78-a882bd723332/volumes" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.149016 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.772838 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bfcmv"] Nov 28 21:13:01 crc kubenswrapper[4957]: E1128 21:13:01.773531 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.773543 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.773779 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5d38cb-30e7-40d2-9c78-a882bd723332" containerName="heat-engine" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.774501 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.776850 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.776853 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.841019 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bfcmv"] Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.866370 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsp4j\" (UniqueName: \"kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.866474 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.866518 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.866781 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.971546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsp4j\" (UniqueName: \"kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.971650 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.971707 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.971801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.973656 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.975684 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.980834 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.981856 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.983496 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:01 crc kubenswrapper[4957]: I1128 21:13:01.996831 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.008941 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.043396 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.044903 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.046840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsp4j\" (UniqueName: \"kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j\") pod \"nova-cell0-cell-mapping-bfcmv\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " pod="openstack/nova-cell0-cell-mapping-bfcmv"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.054967 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.073322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.073364 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.073528 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqlr\" (UniqueName: \"kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.073572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.091076 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bfcmv"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.091407 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.146924 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.149312 4957 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.152852 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.165808 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181372 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfr7h\" (UniqueName: \"kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181464 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181486 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181515 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqlr\" (UniqueName: \"kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181560 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181620 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.181650 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.182386 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.194725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle\") pod \"nova-api-0\" (UID:
\"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.199880 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.253707 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"] Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.255902 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.280857 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqlr\" (UniqueName: \"kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr\") pod \"nova-api-0\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " pod="openstack/nova-api-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.285478 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.286908 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.287066 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfr7h\" (UniqueName: \"kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.287311 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4k9s\" (UniqueName: \"kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.287423 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.287544 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.287685 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: 
I1128 21:13:02.287779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.291991 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.294606 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.350353 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"]
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.367628 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfr7h\" (UniqueName: \"kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h\") pod \"nova-scheduler-0\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " pod="openstack/nova-scheduler-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392726 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392758 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392807 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\")
" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392829 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4k9s\" (UniqueName: \"kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392928 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.392966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.393001 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8pq\" (UniqueName: \"kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.393104 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.393901 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.404426 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.432990 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.435000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4k9s\" (UniqueName: \"kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s\") pod \"nova-metadata-0\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.462270 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.463828 4957 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.472637 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495828 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495855 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495871 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.495966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck8pq\" (UniqueName: \"kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.497147 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.497694 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.498878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.499593 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.500195 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.516232 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.529383 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck8pq\" (UniqueName: \"kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq\") pod \"dnsmasq-dns-9b86998b5-sgfk6\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.599956 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.600120 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.600231 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxz8\" (UniqueName: \"kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.618931 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.655781 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.692680 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.703401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.704308 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxz8\" (UniqueName: \"kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.704509 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.714161 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.718959 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.744504 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxz8\" (UniqueName: \"kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8\") pod \"nova-cell1-novncproxy-0\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:02 crc kubenswrapper[4957]: I1128 21:13:02.859297 4957 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:02.999222 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bfcmv"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.396713 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.723042 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.737571 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.749349 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.829069 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bfcmv" event={"ID":"06fae7da-8394-458c-ad75-2095913be98f","Type":"ContainerStarted","Data":"eee54a860d64bb8e3407886beababbd736c5b3052b4dbd4efb012b7d7351ee12"}
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.829111 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bfcmv" event={"ID":"06fae7da-8394-458c-ad75-2095913be98f","Type":"ContainerStarted","Data":"8ee8a95ef4d8afe3ec8f871c6604cc958c859f876dda8ad24030dc0c127fd71d"}
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.837764 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" event={"ID":"d9611510-ca79-4786-9ae4-de71a9238443","Type":"ContainerStarted","Data":"0550f8d846092e62c77f0ab3d39fc7fb3661f99a7bc7e25c9bfdbeaa39fb9b13"}
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.846000 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerStarted","Data":"a43be73fc323209e9c656fb84aafa38e27f6a5b8cca5fd73def7fb01b6250556"}
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.849335 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.851933 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bfcmv" podStartSLOduration=2.851915608 podStartE2EDuration="2.851915608s" podCreationTimestamp="2025-11-28 21:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:03.845825409 +0000 UTC m=+1423.314473318" watchObservedRunningTime="2025-11-28 21:13:03.851915608 +0000 UTC m=+1423.320563507"
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.856126 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afa8fb39-dda9-40a7-b1c2-aa2d34263620","Type":"ContainerStarted","Data":"d2a935b76542f0b7c757ad163c41385237bdffa3b5238d21c3b26dcaa32a552e"}
Nov 28 21:13:03 crc kubenswrapper[4957]: I1128 21:13:03.858997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerStarted","Data":"ab79b2ffc4b6711a3d16288f7949125e6ca3fdc3d6ed825b4807f748d05c337c"}
Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.100888 4957 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/nova-cell1-conductor-db-sync-jf4ml"] Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.103954 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.106830 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.111330 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.140063 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf4ml"] Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.204794 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdfqb\" (UniqueName: \"kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.204857 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.204882 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.204931 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.307279 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdfqb\" (UniqueName: \"kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.307351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.307384 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: 
\"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.307442 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.323552 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.329907 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.329932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdfqb\" (UniqueName: \"kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.340902 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf4ml\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.481983 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf4ml"
Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.873169 4957 generic.go:334] "Generic (PLEG): container finished" podID="d9611510-ca79-4786-9ae4-de71a9238443" containerID="a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873" exitCode=0
Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.873504 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" event={"ID":"d9611510-ca79-4786-9ae4-de71a9238443","Type":"ContainerDied","Data":"a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873"}
Nov 28 21:13:04 crc kubenswrapper[4957]: I1128 21:13:04.878929 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea","Type":"ContainerStarted","Data":"98465e79232aa091c40688a5b1bc08539c648a80beeb36cac5e9105686f0b55e"}
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.121287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf4ml"]
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.920472 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" event={"ID":"5f8491fd-e8b6-4c13-af22-ab895bf882a4","Type":"ContainerStarted","Data":"9b0f328702a3d648a51e8cd5c4d03439cffb9d35fbb4809a967495077475f005"}
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.921094 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" event={"ID":"5f8491fd-e8b6-4c13-af22-ab895bf882a4","Type":"ContainerStarted","Data":"28f4539cffe170ceba360d32922e8e28a354a9a45d4cb98c384ece011e5a6598"}
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.928597 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" event={"ID":"d9611510-ca79-4786-9ae4-de71a9238443","Type":"ContainerStarted","Data":"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee"}
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.929345 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6"
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.942467 4957 generic.go:334] "Generic (PLEG): container finished" podID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerID="4df6663d07345b6be4daad7e5b61ee448bd1f81594ab7ab60a1a34ec28a445c2" exitCode=0
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.942510 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerDied","Data":"4df6663d07345b6be4daad7e5b61ee448bd1f81594ab7ab60a1a34ec28a445c2"}
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.945689 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" podStartSLOduration=1.945665115 podStartE2EDuration="1.945665115s" podCreationTimestamp="2025-11-28 21:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:05.938694183 +0000 UTC m=+1425.407342092" watchObservedRunningTime="2025-11-28 21:13:05.945665115 +0000 UTC m=+1425.414313024"
Nov 28 21:13:05 crc kubenswrapper[4957]: I1128 21:13:05.971133 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" podStartSLOduration=3.97111436 podStartE2EDuration="3.97111436s" podCreationTimestamp="2025-11-28 21:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:05.963919313 +0000 UTC m=+1425.432567252" watchObservedRunningTime="2025-11-28 21:13:05.97111436 +0000 UTC m=+1425.439762259" Nov 28 21:13:06 crc kubenswrapper[4957]: I1128 21:13:06.260518 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:06 crc kubenswrapper[4957]: I1128 21:13:06.275379 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.001961 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.002380 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.003613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerStarted","Data":"c17caa71c9f59cafcd5c7720a95b22456f8acd5e53d776ec14f5a417d745977b"} Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.040683 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.182905 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lr42\" (UniqueName: \"kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183087 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183289 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183371 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183501 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183561 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.183588 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data\") pod \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\" (UID: \"b6eed030-8767-47ca-a5cb-abd34d8e3b71\") "
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.185453 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.185746 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.191538 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts" (OuterVolumeSpecName: "scripts") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.192367 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42" (OuterVolumeSpecName: "kube-api-access-4lr42") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "kube-api-access-4lr42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.248114 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.286542 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.286574 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.286583 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.286591 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b6eed030-8767-47ca-a5cb-abd34d8e3b71-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.286600 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lr42\" (UniqueName: \"kubernetes.io/projected/b6eed030-8767-47ca-a5cb-abd34d8e3b71-kube-api-access-4lr42\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.448897 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.472387 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data" (OuterVolumeSpecName: "config-data") pod "b6eed030-8767-47ca-a5cb-abd34d8e3b71" (UID: "b6eed030-8767-47ca-a5cb-abd34d8e3b71"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.490701 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:09 crc kubenswrapper[4957]: I1128 21:13:09.490742 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6eed030-8767-47ca-a5cb-abd34d8e3b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.016722 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b6eed030-8767-47ca-a5cb-abd34d8e3b71","Type":"ContainerDied","Data":"96d92477cb626f612ecef28d846b3e3bf67b057a64437ead1a6ef967411a277a"} Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.017056 4957 scope.go:117] "RemoveContainer" containerID="dc83f88dba2540240f257c3753ece7d5d3f3c2f43028ef2d93bd467f24b3c339" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.016775 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.021726 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerStarted","Data":"c44cbd4a5f8cfb1fb94051b61ddbf980c9f9dae0d82a2e6191dcd4bf126a6163"} Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.021932 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-log" containerID="cri-o://c17caa71c9f59cafcd5c7720a95b22456f8acd5e53d776ec14f5a417d745977b" gracePeriod=30 Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.021963 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-metadata" containerID="cri-o://c44cbd4a5f8cfb1fb94051b61ddbf980c9f9dae0d82a2e6191dcd4bf126a6163" gracePeriod=30 Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.027971 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3" gracePeriod=30 Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.028189 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea","Type":"ContainerStarted","Data":"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3"} Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.032794 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerStarted","Data":"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438"} Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.032837 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerStarted","Data":"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245"} Nov 28 21:13:10 crc 
kubenswrapper[4957]: I1128 21:13:10.037861 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afa8fb39-dda9-40a7-b1c2-aa2d34263620","Type":"ContainerStarted","Data":"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629"}
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.057662 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.17601473 podStartE2EDuration="8.05764259s" podCreationTimestamp="2025-11-28 21:13:02 +0000 UTC" firstStartedPulling="2025-11-28 21:13:03.730563134 +0000 UTC m=+1423.199211043" lastFinishedPulling="2025-11-28 21:13:08.612190994 +0000 UTC m=+1428.080838903" observedRunningTime="2025-11-28 21:13:10.049439678 +0000 UTC m=+1429.518087597" watchObservedRunningTime="2025-11-28 21:13:10.05764259 +0000 UTC m=+1429.526290499"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.059665 4957 scope.go:117] "RemoveContainer" containerID="a68bbeba19d7bb380cb8a4fbdbb68b7dfacef3f42834b1c90b103934215df925"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.093486 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.359183074 podStartE2EDuration="8.093463281s" podCreationTimestamp="2025-11-28 21:13:02 +0000 UTC" firstStartedPulling="2025-11-28 21:13:03.871433628 +0000 UTC m=+1423.340081537" lastFinishedPulling="2025-11-28 21:13:08.605713835 +0000 UTC m=+1428.074361744" observedRunningTime="2025-11-28 21:13:10.068968719 +0000 UTC m=+1429.537616628" watchObservedRunningTime="2025-11-28 21:13:10.093463281 +0000 UTC m=+1429.562111190"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.097004 4957 scope.go:117] "RemoveContainer" containerID="dbbc173d8b3cd946b867bb5224ebd9fcfc29e6749db8fa51d13aaa81f2261c6c"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.100705 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.905656431 podStartE2EDuration="9.100682138s" podCreationTimestamp="2025-11-28 21:13:01 +0000 UTC" firstStartedPulling="2025-11-28 21:13:03.410149695 +0000 UTC m=+1422.878797604" lastFinishedPulling="2025-11-28 21:13:08.605175402 +0000 UTC m=+1428.073823311" observedRunningTime="2025-11-28 21:13:10.088290594 +0000 UTC m=+1429.556938493" watchObservedRunningTime="2025-11-28 21:13:10.100682138 +0000 UTC m=+1429.569330067"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.129366 4957 scope.go:117] "RemoveContainer" containerID="4df6663d07345b6be4daad7e5b61ee448bd1f81594ab7ab60a1a34ec28a445c2"
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.143341 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.188037 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.198247 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.321183999 podStartE2EDuration="9.198229797s" podCreationTimestamp="2025-11-28 21:13:01 +0000 UTC" firstStartedPulling="2025-11-28 21:13:03.727398976 +0000 UTC m=+1423.196046885" lastFinishedPulling="2025-11-28 21:13:08.604444774 +0000 UTC m=+1428.073092683" observedRunningTime="2025-11-28 21:13:10.122022343 +0000 UTC m=+1429.590670252" watchObservedRunningTime="2025-11-28 21:13:10.198229797 +0000 UTC
m=+1429.666877706" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.227437 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:10 crc kubenswrapper[4957]: E1128 21:13:10.228102 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-notification-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228129 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-notification-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: E1128 21:13:10.228162 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="sg-core" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228171 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="sg-core" Nov 28 21:13:10 crc kubenswrapper[4957]: E1128 21:13:10.228278 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-central-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228291 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-central-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: E1128 21:13:10.228318 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="proxy-httpd" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228327 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="proxy-httpd" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228615 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="proxy-httpd" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228652 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="sg-core" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228669 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-central-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.228682 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" containerName="ceilometer-notification-agent" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.231488 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.234449 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.236794 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.245977 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.316922 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.316978 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.317013 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6c5x\" (UniqueName: \"kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.317035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.317062 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.317280 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.317514 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419338 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419467 
4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419504 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419530 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6c5x\" (UniqueName: \"kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419556 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.419626 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.420190 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.420552 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.424860 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.425783 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.426025 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.437437 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.440511 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6c5x\" (UniqueName: \"kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x\") pod \"ceilometer-0\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.573669 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:10 crc kubenswrapper[4957]: I1128 21:13:10.850688 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6eed030-8767-47ca-a5cb-abd34d8e3b71" path="/var/lib/kubelet/pods/b6eed030-8767-47ca-a5cb-abd34d8e3b71/volumes" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.070216 4957 generic.go:334] "Generic (PLEG): container finished" podID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerID="c44cbd4a5f8cfb1fb94051b61ddbf980c9f9dae0d82a2e6191dcd4bf126a6163" exitCode=0 Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.070260 4957 generic.go:334] "Generic (PLEG): container finished" podID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerID="c17caa71c9f59cafcd5c7720a95b22456f8acd5e53d776ec14f5a417d745977b" exitCode=143 Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.070560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerDied","Data":"c44cbd4a5f8cfb1fb94051b61ddbf980c9f9dae0d82a2e6191dcd4bf126a6163"} Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.070694 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerDied","Data":"c17caa71c9f59cafcd5c7720a95b22456f8acd5e53d776ec14f5a417d745977b"} Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.150730 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.459858 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.550701 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4k9s\" (UniqueName: \"kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s\") pod \"d857bc68-73db-45e7-a4ee-e8456edae6f2\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.550802 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs\") pod \"d857bc68-73db-45e7-a4ee-e8456edae6f2\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.550882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle\") pod \"d857bc68-73db-45e7-a4ee-e8456edae6f2\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.550970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data\") pod \"d857bc68-73db-45e7-a4ee-e8456edae6f2\" (UID: \"d857bc68-73db-45e7-a4ee-e8456edae6f2\") " Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.554920 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs" (OuterVolumeSpecName: "logs") pod "d857bc68-73db-45e7-a4ee-e8456edae6f2" (UID: "d857bc68-73db-45e7-a4ee-e8456edae6f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.568277 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s" (OuterVolumeSpecName: "kube-api-access-k4k9s") pod "d857bc68-73db-45e7-a4ee-e8456edae6f2" (UID: "d857bc68-73db-45e7-a4ee-e8456edae6f2"). InnerVolumeSpecName "kube-api-access-k4k9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.641520 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d857bc68-73db-45e7-a4ee-e8456edae6f2" (UID: "d857bc68-73db-45e7-a4ee-e8456edae6f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.654535 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4k9s\" (UniqueName: \"kubernetes.io/projected/d857bc68-73db-45e7-a4ee-e8456edae6f2-kube-api-access-k4k9s\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.654563 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857bc68-73db-45e7-a4ee-e8456edae6f2-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.654573 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.667530 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data" (OuterVolumeSpecName: "config-data") pod "d857bc68-73db-45e7-a4ee-e8456edae6f2" (UID: "d857bc68-73db-45e7-a4ee-e8456edae6f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.707341 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:11 crc kubenswrapper[4957]: I1128 21:13:11.756600 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857bc68-73db-45e7-a4ee-e8456edae6f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.084768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerStarted","Data":"13815cf7d09601f737e420716013e18a1eadd8b7d391f1d5bf56dc9825525bed"} Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.084839 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerStarted","Data":"d2dbbecd8672df274f62ea36a61b7ef3816bcc73aff08e2309ca80a04b3b6b97"} Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.087633 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d857bc68-73db-45e7-a4ee-e8456edae6f2","Type":"ContainerDied","Data":"ab79b2ffc4b6711a3d16288f7949125e6ca3fdc3d6ed825b4807f748d05c337c"} Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.087705 4957 scope.go:117] "RemoveContainer" containerID="c44cbd4a5f8cfb1fb94051b61ddbf980c9f9dae0d82a2e6191dcd4bf126a6163" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.087738 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.132925 4957 scope.go:117] "RemoveContainer" containerID="c17caa71c9f59cafcd5c7720a95b22456f8acd5e53d776ec14f5a417d745977b" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.140990 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.161753 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.175230 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:12 crc kubenswrapper[4957]: E1128 21:13:12.176094 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-log" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.176119 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-log" Nov 28 21:13:12 crc kubenswrapper[4957]: E1128 21:13:12.176138 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-metadata" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.176149 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-metadata" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.176695 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-log" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.176754 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" containerName="nova-metadata-metadata" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.178611 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.183457 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.183523 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.185827 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.266684 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrm7x\" (UniqueName: \"kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.266976 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.267069 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.267191 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.267409 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.286784 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.286861 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.369327 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.369401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 
21:13:12.369512 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrm7x\" (UniqueName: \"kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.369556 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.369643 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.369910 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.374984 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.377707 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.377969 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.389881 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrm7x\" (UniqueName: \"kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x\") pod \"nova-metadata-0\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.505410 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.622988 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.623577 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.679233 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.702340 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.789880 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.790100 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="dnsmasq-dns" containerID="cri-o://f5a10e9b3f6da32803b771254b8146881ed1ff5a3313b0df3db2a7422d1e64f0" gracePeriod=10 Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.853561 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d857bc68-73db-45e7-a4ee-e8456edae6f2" path="/var/lib/kubelet/pods/d857bc68-73db-45e7-a4ee-e8456edae6f2/volumes" Nov 28 21:13:12 crc kubenswrapper[4957]: I1128 21:13:12.865033 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.026758 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.215202 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerStarted","Data":"bad57106a055b6ad070a352843c0372b663ebfb9d1e3788415aad7db7b1bf72e"} Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.235638 4957 generic.go:334] "Generic (PLEG): container finished" podID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerID="f5a10e9b3f6da32803b771254b8146881ed1ff5a3313b0df3db2a7422d1e64f0" exitCode=0 Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.235976 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" event={"ID":"a57e309f-cfbf-47ac-8f73-9276f89ce36b","Type":"ContainerDied","Data":"f5a10e9b3f6da32803b771254b8146881ed1ff5a3313b0df3db2a7422d1e64f0"} Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.239025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerStarted","Data":"12bde0d5b890e96c7da212e8b1e846cebf36c4b700bb6b76288a50f05eccc670"} Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.308444 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.370588 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 
21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.370984 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 21:13:13 crc kubenswrapper[4957]: I1128 21:13:13.991972 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.142664 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-7gskj"] Nov 28 21:13:14 crc kubenswrapper[4957]: E1128 21:13:14.143608 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="dnsmasq-dns" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.143624 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="dnsmasq-dns" Nov 28 21:13:14 crc kubenswrapper[4957]: E1128 21:13:14.143650 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="init" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.143656 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="init" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.143937 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" containerName="dnsmasq-dns" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.144850 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154513 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb\") pod \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154603 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0\") pod \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154684 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb\") pod \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154739 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkf84\" (UniqueName: \"kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84\") pod \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154840 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc\") pod 
\"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.154882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config\") pod \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\" (UID: \"a57e309f-cfbf-47ac-8f73-9276f89ce36b\") " Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.178072 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84" (OuterVolumeSpecName: "kube-api-access-pkf84") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "kube-api-access-pkf84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.228686 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-7gskj"] Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.274876 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.275166 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csccq\" (UniqueName: \"kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.275240 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkf84\" (UniqueName: \"kubernetes.io/projected/a57e309f-cfbf-47ac-8f73-9276f89ce36b-kube-api-access-pkf84\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.291194 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" event={"ID":"a57e309f-cfbf-47ac-8f73-9276f89ce36b","Type":"ContainerDied","Data":"a82fefd1eba2882c7dc6d87bb6a6709dd019737b422e39b1824e26a73d96ac83"} Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.292097 4957 scope.go:117] "RemoveContainer" containerID="f5a10e9b3f6da32803b771254b8146881ed1ff5a3313b0df3db2a7422d1e64f0" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.292339 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-hxz5w" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.309094 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.316889 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerStarted","Data":"72af166dfb8cb4445812aac66ad801e30a10fe9ab4fe9fa43dcda4c28104be9b"} Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.316939 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerStarted","Data":"60910a1164d0ed236a1495fd59f792980fd50d124b5c2b207b2ccfc5f3a4912c"} Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.319799 4957 scope.go:117] "RemoveContainer" containerID="27e94fb683bfec5be446290e299b9961ab3f41f5b8d7fb7dc4b9e8946424b158" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.336420 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerStarted","Data":"8c2ddfac7c7ea320b53a1643ec92fa2a52d1fea31432c0e9848ea844feb8b8be"} Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.351770 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.363086 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-d86d-account-create-update-d66hv"] Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.365200 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.369371 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.378592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.378803 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csccq\" (UniqueName: \"kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.378955 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.378977 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.380229 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.386864 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d86d-account-create-update-d66hv"] Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.391963 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.394932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csccq\" (UniqueName: \"kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq\") pod \"aodh-db-create-7gskj\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.394959 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config" (OuterVolumeSpecName: "config") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.397504 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.397485137 podStartE2EDuration="2.397485137s" podCreationTimestamp="2025-11-28 21:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:14.337937393 +0000 UTC m=+1433.806585302" watchObservedRunningTime="2025-11-28 21:13:14.397485137 +0000 UTC m=+1433.866133046" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.408744 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a57e309f-cfbf-47ac-8f73-9276f89ce36b" (UID: "a57e309f-cfbf-47ac-8f73-9276f89ce36b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.480160 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.480982 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2zmq\" (UniqueName: \"kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq\") pod \"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.481557 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts\") pod \"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.482137 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.482158 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.482167 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57e309f-cfbf-47ac-8f73-9276f89ce36b-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.584717 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts\") pod \"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.585240 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2zmq\" (UniqueName: \"kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq\") pod 
\"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.585489 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts\") pod \"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.639268 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2zmq\" (UniqueName: \"kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq\") pod \"aodh-d86d-account-create-update-d66hv\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.686027 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.853779 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:13:14 crc kubenswrapper[4957]: I1128 21:13:14.886878 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-hxz5w"] Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.124395 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-7gskj"] Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.350978 4957 generic.go:334] "Generic (PLEG): container finished" podID="06fae7da-8394-458c-ad75-2095913be98f" containerID="eee54a860d64bb8e3407886beababbd736c5b3052b4dbd4efb012b7d7351ee12" exitCode=0 Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.351534 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bfcmv" event={"ID":"06fae7da-8394-458c-ad75-2095913be98f","Type":"ContainerDied","Data":"eee54a860d64bb8e3407886beababbd736c5b3052b4dbd4efb012b7d7351ee12"} Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerStarted","Data":"f8027bd37c390e4a7fce3b41dba64641cb29e6846eda02b87616d583b8d2414e"} Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372187 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-central-agent" containerID="cri-o://13815cf7d09601f737e420716013e18a1eadd8b7d391f1d5bf56dc9825525bed" gracePeriod=30 Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372298 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372353 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="proxy-httpd" containerID="cri-o://f8027bd37c390e4a7fce3b41dba64641cb29e6846eda02b87616d583b8d2414e" gracePeriod=30 Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372424 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="sg-core" containerID="cri-o://8c2ddfac7c7ea320b53a1643ec92fa2a52d1fea31432c0e9848ea844feb8b8be" gracePeriod=30 Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.372492 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-notification-agent" containerID="cri-o://bad57106a055b6ad070a352843c0372b663ebfb9d1e3788415aad7db7b1bf72e" gracePeriod=30 Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.381583 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7gskj" event={"ID":"9a031112-7352-4144-bcfc-72f292273e61","Type":"ContainerStarted","Data":"6477722e28c6dfb63a816d90b894eca40475fbd090924c64a6f5bde2393034f4"} Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.411266 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.579812 podStartE2EDuration="5.411248046s" podCreationTimestamp="2025-11-28 21:13:10 +0000 UTC" firstStartedPulling="2025-11-28 21:13:11.145951371 +0000 UTC m=+1430.614599280" lastFinishedPulling="2025-11-28 21:13:14.977387417 +0000 UTC m=+1434.446035326" observedRunningTime="2025-11-28 21:13:15.4097924 +0000 UTC m=+1434.878440309" watchObservedRunningTime="2025-11-28 21:13:15.411248046 +0000 UTC m=+1434.879895945" Nov 28 21:13:15 crc kubenswrapper[4957]: I1128 21:13:15.575407 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d86d-account-create-update-d66hv"] Nov 28 21:13:15 crc kubenswrapper[4957]: W1128 21:13:15.575409 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3cf64aa_c4a0_4c2a_ac57_544eefc29d51.slice/crio-0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6 WatchSource:0}: Error finding container 0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6: Status 404 returned error can't find the container with id 0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.394156 4957 generic.go:334] "Generic (PLEG): container finished" podID="5f8491fd-e8b6-4c13-af22-ab895bf882a4" containerID="9b0f328702a3d648a51e8cd5c4d03439cffb9d35fbb4809a967495077475f005" exitCode=0 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.394262 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" event={"ID":"5f8491fd-e8b6-4c13-af22-ab895bf882a4","Type":"ContainerDied","Data":"9b0f328702a3d648a51e8cd5c4d03439cffb9d35fbb4809a967495077475f005"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.396308 4957 generic.go:334] "Generic (PLEG): container finished" podID="b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" containerID="202eb05c2f99f5990d7fd61a1dd179c2f8468a1c96937f76da6f4f8d404ff69a" exitCode=0 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.396368 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d86d-account-create-update-d66hv" event={"ID":"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51","Type":"ContainerDied","Data":"202eb05c2f99f5990d7fd61a1dd179c2f8468a1c96937f76da6f4f8d404ff69a"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.396428 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d86d-account-create-update-d66hv" 
event={"ID":"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51","Type":"ContainerStarted","Data":"0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.399330 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerID="8c2ddfac7c7ea320b53a1643ec92fa2a52d1fea31432c0e9848ea844feb8b8be" exitCode=2 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.399372 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerID="bad57106a055b6ad070a352843c0372b663ebfb9d1e3788415aad7db7b1bf72e" exitCode=0 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.399414 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerDied","Data":"8c2ddfac7c7ea320b53a1643ec92fa2a52d1fea31432c0e9848ea844feb8b8be"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.399459 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerDied","Data":"bad57106a055b6ad070a352843c0372b663ebfb9d1e3788415aad7db7b1bf72e"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.400882 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a031112-7352-4144-bcfc-72f292273e61" containerID="88f9289dddbe4c6d4281716dd48b4177cf577077b23497be302994e7906cfa87" exitCode=0 Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.400927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7gskj" event={"ID":"9a031112-7352-4144-bcfc-72f292273e61","Type":"ContainerDied","Data":"88f9289dddbe4c6d4281716dd48b4177cf577077b23497be302994e7906cfa87"} Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.825334 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57e309f-cfbf-47ac-8f73-9276f89ce36b" path="/var/lib/kubelet/pods/a57e309f-cfbf-47ac-8f73-9276f89ce36b/volumes" Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.832523 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.950905 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts\") pod \"06fae7da-8394-458c-ad75-2095913be98f\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.951095 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data\") pod \"06fae7da-8394-458c-ad75-2095913be98f\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.951188 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle\") pod \"06fae7da-8394-458c-ad75-2095913be98f\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.951289 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsp4j\" (UniqueName: \"kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j\") pod \"06fae7da-8394-458c-ad75-2095913be98f\" (UID: \"06fae7da-8394-458c-ad75-2095913be98f\") " Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.956902 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts" (OuterVolumeSpecName: "scripts") pod "06fae7da-8394-458c-ad75-2095913be98f" (UID: "06fae7da-8394-458c-ad75-2095913be98f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.959891 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j" (OuterVolumeSpecName: "kube-api-access-hsp4j") pod "06fae7da-8394-458c-ad75-2095913be98f" (UID: "06fae7da-8394-458c-ad75-2095913be98f"). InnerVolumeSpecName "kube-api-access-hsp4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.985487 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06fae7da-8394-458c-ad75-2095913be98f" (UID: "06fae7da-8394-458c-ad75-2095913be98f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:16 crc kubenswrapper[4957]: I1128 21:13:16.991288 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data" (OuterVolumeSpecName: "config-data") pod "06fae7da-8394-458c-ad75-2095913be98f" (UID: "06fae7da-8394-458c-ad75-2095913be98f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.053800 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsp4j\" (UniqueName: \"kubernetes.io/projected/06fae7da-8394-458c-ad75-2095913be98f-kube-api-access-hsp4j\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.053830 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.053840 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.053850 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06fae7da-8394-458c-ad75-2095913be98f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.412937 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bfcmv" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.414651 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bfcmv" event={"ID":"06fae7da-8394-458c-ad75-2095913be98f","Type":"ContainerDied","Data":"8ee8a95ef4d8afe3ec8f871c6604cc958c859f876dda8ad24030dc0c127fd71d"} Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.414694 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee8a95ef4d8afe3ec8f871c6604cc958c859f876dda8ad24030dc0c127fd71d" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.506500 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.506818 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.576255 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.576508 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-log" containerID="cri-o://4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245" gracePeriod=30 Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.576990 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-api" containerID="cri-o://708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438" gracePeriod=30 Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.610960 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.611180 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerName="nova-scheduler-scheduler" containerID="cri-o://100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" gracePeriod=30 Nov 28 21:13:17 crc kubenswrapper[4957]: 
I1128 21:13:17.622295 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:17 crc kubenswrapper[4957]: E1128 21:13:17.625176 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:13:17 crc kubenswrapper[4957]: E1128 21:13:17.627154 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:13:17 crc kubenswrapper[4957]: E1128 21:13:17.631348 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:13:17 crc kubenswrapper[4957]: E1128 21:13:17.631388 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerName="nova-scheduler-scheduler" Nov 28 21:13:17 crc kubenswrapper[4957]: I1128 21:13:17.969959 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.081287 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdfqb\" (UniqueName: \"kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb\") pod \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.081546 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data\") pod \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.081601 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle\") pod \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.081643 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts\") pod \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\" (UID: \"5f8491fd-e8b6-4c13-af22-ab895bf882a4\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.087423 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts" (OuterVolumeSpecName: "scripts") pod "5f8491fd-e8b6-4c13-af22-ab895bf882a4" (UID: "5f8491fd-e8b6-4c13-af22-ab895bf882a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.100429 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb" (OuterVolumeSpecName: "kube-api-access-pdfqb") pod "5f8491fd-e8b6-4c13-af22-ab895bf882a4" (UID: "5f8491fd-e8b6-4c13-af22-ab895bf882a4"). InnerVolumeSpecName "kube-api-access-pdfqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.118737 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data" (OuterVolumeSpecName: "config-data") pod "5f8491fd-e8b6-4c13-af22-ab895bf882a4" (UID: "5f8491fd-e8b6-4c13-af22-ab895bf882a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.131950 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f8491fd-e8b6-4c13-af22-ab895bf882a4" (UID: "5f8491fd-e8b6-4c13-af22-ab895bf882a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.177816 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.183898 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.183923 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.183932 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8491fd-e8b6-4c13-af22-ab895bf882a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.183941 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdfqb\" (UniqueName: \"kubernetes.io/projected/5f8491fd-e8b6-4c13-af22-ab895bf882a4-kube-api-access-pdfqb\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.184975 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.285426 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts\") pod \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.285707 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2zmq\" (UniqueName: \"kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq\") pod \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\" (UID: \"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.285774 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts\") pod \"9a031112-7352-4144-bcfc-72f292273e61\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.285814 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csccq\" (UniqueName: \"kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq\") pod \"9a031112-7352-4144-bcfc-72f292273e61\" (UID: \"9a031112-7352-4144-bcfc-72f292273e61\") " Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.285933 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" (UID: "b3cf64aa-c4a0-4c2a-ac57-544eefc29d51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.286248 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a031112-7352-4144-bcfc-72f292273e61" (UID: "9a031112-7352-4144-bcfc-72f292273e61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.286546 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.286564 4957 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a031112-7352-4144-bcfc-72f292273e61-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.291506 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq" (OuterVolumeSpecName: "kube-api-access-csccq") pod "9a031112-7352-4144-bcfc-72f292273e61" (UID: "9a031112-7352-4144-bcfc-72f292273e61"). InnerVolumeSpecName "kube-api-access-csccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.291577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq" (OuterVolumeSpecName: "kube-api-access-g2zmq") pod "b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" (UID: "b3cf64aa-c4a0-4c2a-ac57-544eefc29d51"). InnerVolumeSpecName "kube-api-access-g2zmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.388754 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2zmq\" (UniqueName: \"kubernetes.io/projected/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51-kube-api-access-g2zmq\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.388797 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csccq\" (UniqueName: \"kubernetes.io/projected/9a031112-7352-4144-bcfc-72f292273e61-kube-api-access-csccq\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.424635 4957 generic.go:334] "Generic (PLEG): container finished" podID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerID="4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245" exitCode=143 Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.424709 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerDied","Data":"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245"} Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.426381 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7gskj" event={"ID":"9a031112-7352-4144-bcfc-72f292273e61","Type":"ContainerDied","Data":"6477722e28c6dfb63a816d90b894eca40475fbd090924c64a6f5bde2393034f4"} Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.426406 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6477722e28c6dfb63a816d90b894eca40475fbd090924c64a6f5bde2393034f4" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.426567 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7gskj" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.428247 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" event={"ID":"5f8491fd-e8b6-4c13-af22-ab895bf882a4","Type":"ContainerDied","Data":"28f4539cffe170ceba360d32922e8e28a354a9a45d4cb98c384ece011e5a6598"} Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.428280 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf4ml" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.428287 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f4539cffe170ceba360d32922e8e28a354a9a45d4cb98c384ece011e5a6598" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.429924 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d86d-account-create-update-d66hv" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.429924 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d86d-account-create-update-d66hv" event={"ID":"b3cf64aa-c4a0-4c2a-ac57-544eefc29d51","Type":"ContainerDied","Data":"0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6"} Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.429974 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dbddcd224a4d42e5323b22407c437cea2b0c89082c7fccfe35898533ebb37e6" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.430029 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-log" containerID="cri-o://60910a1164d0ed236a1495fd59f792980fd50d124b5c2b207b2ccfc5f3a4912c" gracePeriod=30 Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.430096 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-metadata" containerID="cri-o://72af166dfb8cb4445812aac66ad801e30a10fe9ab4fe9fa43dcda4c28104be9b" gracePeriod=30 Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.589385 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 21:13:18 crc kubenswrapper[4957]: E1128 21:13:18.589937 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fae7da-8394-458c-ad75-2095913be98f" containerName="nova-manage" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.589950 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fae7da-8394-458c-ad75-2095913be98f" containerName="nova-manage" Nov 28 21:13:18 crc kubenswrapper[4957]: E1128 21:13:18.589967 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8491fd-e8b6-4c13-af22-ab895bf882a4" containerName="nova-cell1-conductor-db-sync" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.589974 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8491fd-e8b6-4c13-af22-ab895bf882a4" containerName="nova-cell1-conductor-db-sync" Nov 28 21:13:18 crc kubenswrapper[4957]: E1128 21:13:18.589993 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" containerName="mariadb-account-create-update" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590000 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" containerName="mariadb-account-create-update" Nov 28 21:13:18 crc kubenswrapper[4957]: E1128 21:13:18.590011 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a031112-7352-4144-bcfc-72f292273e61" containerName="mariadb-database-create" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590017 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a031112-7352-4144-bcfc-72f292273e61" containerName="mariadb-database-create" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590269 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a031112-7352-4144-bcfc-72f292273e61" containerName="mariadb-database-create" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590282 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="06fae7da-8394-458c-ad75-2095913be98f" containerName="nova-manage" Nov 28 
21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590296 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" containerName="mariadb-account-create-update" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.590304 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8491fd-e8b6-4c13-af22-ab895bf882a4" containerName="nova-cell1-conductor-db-sync" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.591171 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.594678 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.629565 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.697162 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ctb\" (UniqueName: \"kubernetes.io/projected/33612176-0997-4d00-a797-b5997f0d00c3-kube-api-access-c6ctb\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.697247 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.697323 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.799777 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.800133 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.800265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ctb\" (UniqueName: \"kubernetes.io/projected/33612176-0997-4d00-a797-b5997f0d00c3-kube-api-access-c6ctb\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.803951 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.805809 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33612176-0997-4d00-a797-b5997f0d00c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.816316 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ctb\" (UniqueName: \"kubernetes.io/projected/33612176-0997-4d00-a797-b5997f0d00c3-kube-api-access-c6ctb\") pod \"nova-cell1-conductor-0\" (UID: \"33612176-0997-4d00-a797-b5997f0d00c3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:18 crc kubenswrapper[4957]: I1128 21:13:18.934289 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.454391 4957 generic.go:334] "Generic (PLEG): container finished" podID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerID="72af166dfb8cb4445812aac66ad801e30a10fe9ab4fe9fa43dcda4c28104be9b" exitCode=0 Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.454824 4957 generic.go:334] "Generic (PLEG): container finished" podID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerID="60910a1164d0ed236a1495fd59f792980fd50d124b5c2b207b2ccfc5f3a4912c" exitCode=143 Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.454849 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerDied","Data":"72af166dfb8cb4445812aac66ad801e30a10fe9ab4fe9fa43dcda4c28104be9b"} Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.454906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerDied","Data":"60910a1164d0ed236a1495fd59f792980fd50d124b5c2b207b2ccfc5f3a4912c"} Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.486051 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.556612 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.627874 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data\") pod \"ccf1381d-7243-48f7-b3ea-682c89619ea7\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.627999 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle\") pod \"ccf1381d-7243-48f7-b3ea-682c89619ea7\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.628054 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs\") pod \"ccf1381d-7243-48f7-b3ea-682c89619ea7\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.628270 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs\") pod \"ccf1381d-7243-48f7-b3ea-682c89619ea7\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.628343 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrm7x\" (UniqueName: \"kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x\") pod \"ccf1381d-7243-48f7-b3ea-682c89619ea7\" (UID: \"ccf1381d-7243-48f7-b3ea-682c89619ea7\") " Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.628907 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs" (OuterVolumeSpecName: "logs") pod "ccf1381d-7243-48f7-b3ea-682c89619ea7" (UID: "ccf1381d-7243-48f7-b3ea-682c89619ea7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.629538 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccf1381d-7243-48f7-b3ea-682c89619ea7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.633058 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x" (OuterVolumeSpecName: "kube-api-access-zrm7x") pod "ccf1381d-7243-48f7-b3ea-682c89619ea7" (UID: "ccf1381d-7243-48f7-b3ea-682c89619ea7"). InnerVolumeSpecName "kube-api-access-zrm7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.660726 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data" (OuterVolumeSpecName: "config-data") pod "ccf1381d-7243-48f7-b3ea-682c89619ea7" (UID: "ccf1381d-7243-48f7-b3ea-682c89619ea7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.676227 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccf1381d-7243-48f7-b3ea-682c89619ea7" (UID: "ccf1381d-7243-48f7-b3ea-682c89619ea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.696873 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ccf1381d-7243-48f7-b3ea-682c89619ea7" (UID: "ccf1381d-7243-48f7-b3ea-682c89619ea7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.731962 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.731993 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.732002 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrm7x\" (UniqueName: \"kubernetes.io/projected/ccf1381d-7243-48f7-b3ea-682c89619ea7-kube-api-access-zrm7x\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:19 crc kubenswrapper[4957]: I1128 21:13:19.732011 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf1381d-7243-48f7-b3ea-682c89619ea7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.477343 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33612176-0997-4d00-a797-b5997f0d00c3","Type":"ContainerStarted","Data":"5a996e0b735caaf519303c52a5f29cc6a6b210ffa0fd0a01c6ba02127ca29183"} Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.477399 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33612176-0997-4d00-a797-b5997f0d00c3","Type":"ContainerStarted","Data":"b505687df6110cc986a86ce84164501749a906a53a4841e96d5bbf40112b16e1"} Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.477443 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.480656 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccf1381d-7243-48f7-b3ea-682c89619ea7","Type":"ContainerDied","Data":"12bde0d5b890e96c7da212e8b1e846cebf36c4b700bb6b76288a50f05eccc670"} Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.480714 4957 scope.go:117] "RemoveContainer" containerID="72af166dfb8cb4445812aac66ad801e30a10fe9ab4fe9fa43dcda4c28104be9b" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.480889 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.500099 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5000734209999997 podStartE2EDuration="2.500073421s" podCreationTimestamp="2025-11-28 21:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:20.496713548 +0000 UTC m=+1439.965361467" watchObservedRunningTime="2025-11-28 21:13:20.500073421 +0000 UTC m=+1439.968721370" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.519151 4957 scope.go:117] "RemoveContainer" containerID="60910a1164d0ed236a1495fd59f792980fd50d124b5c2b207b2ccfc5f3a4912c" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.535871 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.558593 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.584124 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:20 crc kubenswrapper[4957]: E1128 21:13:20.584913 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-metadata" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.585086 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-metadata" Nov 28 21:13:20 crc kubenswrapper[4957]: E1128 21:13:20.585187 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-log" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.585290 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-log" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.585576 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-log" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.585669 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" containerName="nova-metadata-metadata" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.587098 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.599004 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.600293 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.610472 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.655168 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.655288 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.655329 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.655438 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.655568 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlzp\" (UniqueName: \"kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.757875 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlzp\" (UniqueName: \"kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.757946 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.757981 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " 
pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.758010 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.758095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.758599 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.761607 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.761680 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.762771 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.774160 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whlzp\" (UniqueName: \"kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.776744 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.777547 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " pod="openstack/nova-metadata-0" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.832469 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf1381d-7243-48f7-b3ea-682c89619ea7" path="/var/lib/kubelet/pods/ccf1381d-7243-48f7-b3ea-682c89619ea7/volumes" Nov 28 21:13:20 crc kubenswrapper[4957]: I1128 21:13:20.923850 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.323292 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.372403 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle\") pod \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.372620 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data\") pod \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.372661 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfr7h\" (UniqueName: \"kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h\") pod \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\" (UID: \"afa8fb39-dda9-40a7-b1c2-aa2d34263620\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.386429 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h" (OuterVolumeSpecName: "kube-api-access-wfr7h") pod "afa8fb39-dda9-40a7-b1c2-aa2d34263620" (UID: "afa8fb39-dda9-40a7-b1c2-aa2d34263620"). InnerVolumeSpecName "kube-api-access-wfr7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.410335 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afa8fb39-dda9-40a7-b1c2-aa2d34263620" (UID: "afa8fb39-dda9-40a7-b1c2-aa2d34263620"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.429535 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.437267 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data" (OuterVolumeSpecName: "config-data") pod "afa8fb39-dda9-40a7-b1c2-aa2d34263620" (UID: "afa8fb39-dda9-40a7-b1c2-aa2d34263620"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.474501 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqlr\" (UniqueName: \"kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr\") pod \"f47a4f90-c533-4d50-81e2-5958a92b501c\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.474630 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs\") pod \"f47a4f90-c533-4d50-81e2-5958a92b501c\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.474743 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data\") pod \"f47a4f90-c533-4d50-81e2-5958a92b501c\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475039 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle\") pod \"f47a4f90-c533-4d50-81e2-5958a92b501c\" (UID: \"f47a4f90-c533-4d50-81e2-5958a92b501c\") " Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475044 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs" (OuterVolumeSpecName: "logs") pod "f47a4f90-c533-4d50-81e2-5958a92b501c" (UID: "f47a4f90-c533-4d50-81e2-5958a92b501c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475782 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f47a4f90-c533-4d50-81e2-5958a92b501c-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475806 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475820 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfr7h\" (UniqueName: \"kubernetes.io/projected/afa8fb39-dda9-40a7-b1c2-aa2d34263620-kube-api-access-wfr7h\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.475837 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa8fb39-dda9-40a7-b1c2-aa2d34263620-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.479405 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr" (OuterVolumeSpecName: "kube-api-access-4qqlr") pod "f47a4f90-c533-4d50-81e2-5958a92b501c" (UID: "f47a4f90-c533-4d50-81e2-5958a92b501c"). InnerVolumeSpecName "kube-api-access-4qqlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.497536 4957 generic.go:334] "Generic (PLEG): container finished" podID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerID="708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438" exitCode=0 Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.497833 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerDied","Data":"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438"} Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.497919 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f47a4f90-c533-4d50-81e2-5958a92b501c","Type":"ContainerDied","Data":"a43be73fc323209e9c656fb84aafa38e27f6a5b8cca5fd73def7fb01b6250556"} Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.497972 4957 scope.go:117] "RemoveContainer" containerID="708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.497953 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.518492 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f47a4f90-c533-4d50-81e2-5958a92b501c" (UID: "f47a4f90-c533-4d50-81e2-5958a92b501c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.524656 4957 generic.go:334] "Generic (PLEG): container finished" podID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" exitCode=0 Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.525840 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.527520 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afa8fb39-dda9-40a7-b1c2-aa2d34263620","Type":"ContainerDied","Data":"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629"} Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.527563 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afa8fb39-dda9-40a7-b1c2-aa2d34263620","Type":"ContainerDied","Data":"d2a935b76542f0b7c757ad163c41385237bdffa3b5238d21c3b26dcaa32a552e"} Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.535790 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data" (OuterVolumeSpecName: "config-data") pod "f47a4f90-c533-4d50-81e2-5958a92b501c" (UID: "f47a4f90-c533-4d50-81e2-5958a92b501c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.555623 4957 scope.go:117] "RemoveContainer" containerID="4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.557153 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.579869 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.579900 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqlr\" (UniqueName: \"kubernetes.io/projected/f47a4f90-c533-4d50-81e2-5958a92b501c-kube-api-access-4qqlr\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.579912 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f47a4f90-c533-4d50-81e2-5958a92b501c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.589766 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.612867 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.623475 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.624186 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-log" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624224 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-log" Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.624244 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-api" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624254 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-api" Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.624266 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerName="nova-scheduler-scheduler" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624275 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerName="nova-scheduler-scheduler" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624562 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" containerName="nova-scheduler-scheduler" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624591 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-api" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.624621 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" containerName="nova-api-log" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.625723 4957 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.629701 4957 scope.go:117] "RemoveContainer" containerID="708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.629866 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.631314 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438\": container with ID starting with 708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438 not found: ID does not exist" containerID="708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.631493 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438"} err="failed to get container status \"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438\": rpc error: code = NotFound desc = could not find container \"708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438\": container with ID starting with 708751998abd79d48d746a197ecbb4609f59731ab619a9083e46bdbb5739d438 not found: ID does not exist" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.631522 4957 scope.go:117] "RemoveContainer" containerID="4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245" Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.635358 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245\": container with ID starting with 4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245 not found: ID does not exist" containerID="4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.635397 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245"} err="failed to get container status \"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245\": rpc error: code = NotFound desc = could not find container \"4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245\": container with ID starting with 4dcfdf63d501870b838d64a2b81139378724cf684c067ab1045024fa2c822245 not found: ID does not exist" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.635489 4957 scope.go:117] "RemoveContainer" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.636125 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.682419 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4jp\" (UniqueName: \"kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.682592 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.682674 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.690522 4957 scope.go:117] "RemoveContainer" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" Nov 28 21:13:21 crc kubenswrapper[4957]: E1128 21:13:21.690932 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629\": container with ID starting with 100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629 not found: ID does not exist" containerID="100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.691017 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629"} err="failed to get container status \"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629\": rpc error: code = NotFound desc = could not find container \"100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629\": container with ID starting with 100e0f06bd2193035895342cd9d633972265223faf49d9ea6a4d1fda7f166629 not found: ID does not exist" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.784615 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4jp\" (UniqueName: \"kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.785134 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.785280 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.789106 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.793291 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.810442 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4jp\" (UniqueName: \"kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp\") pod \"nova-scheduler-0\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.877127 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.890790 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.905333 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.907829 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.911737 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.921592 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.972174 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.992050 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.992179 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.992238 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:21 crc kubenswrapper[4957]: I1128 21:13:21.992280 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrl4v\" (UniqueName: \"kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.102376 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.102808 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.102849 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.102883 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrl4v\" (UniqueName: \"kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.103101 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.112199 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.112293 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.127659 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrl4v\" (UniqueName: \"kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v\") pod \"nova-api-0\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.240363 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.542827 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.556967 4957 generic.go:334] "Generic (PLEG): container finished" podID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerID="13815cf7d09601f737e420716013e18a1eadd8b7d391f1d5bf56dc9825525bed" exitCode=0 Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.557562 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerDied","Data":"13815cf7d09601f737e420716013e18a1eadd8b7d391f1d5bf56dc9825525bed"} Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.560434 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerStarted","Data":"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11"} Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.560487 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerStarted","Data":"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db"} Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.560501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerStarted","Data":"2b092a093002437fe91f38343e99b5a4755bfbf60192173b9c1dbbddde049b8d"} Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.583026 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.58300698 podStartE2EDuration="2.58300698s" podCreationTimestamp="2025-11-28 21:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:22.575325181 +0000 UTC m=+1442.043973090" watchObservedRunningTime="2025-11-28 21:13:22.58300698 +0000 UTC m=+1442.051654889" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.832690 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa8fb39-dda9-40a7-b1c2-aa2d34263620" path="/var/lib/kubelet/pods/afa8fb39-dda9-40a7-b1c2-aa2d34263620/volumes" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.833913 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47a4f90-c533-4d50-81e2-5958a92b501c" path="/var/lib/kubelet/pods/f47a4f90-c533-4d50-81e2-5958a92b501c/volumes" Nov 28 21:13:22 crc kubenswrapper[4957]: I1128 21:13:22.905585 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:22 crc kubenswrapper[4957]: W1128 21:13:22.918460 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ba8b660_8674_4932_b2df_226b0ed63933.slice/crio-126adb9207ce2cca5fe9a3d3191fdf79b4e75263e03b3cbc31e4c5294b2c36eb WatchSource:0}: Error finding container 126adb9207ce2cca5fe9a3d3191fdf79b4e75263e03b3cbc31e4c5294b2c36eb: Status 404 returned error can't find the container with id 126adb9207ce2cca5fe9a3d3191fdf79b4e75263e03b3cbc31e4c5294b2c36eb Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.571466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"59df07c4-3c97-44c0-b83f-bd70e39ba203","Type":"ContainerStarted","Data":"f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9"} Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.571735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59df07c4-3c97-44c0-b83f-bd70e39ba203","Type":"ContainerStarted","Data":"9da17ff814dae48c9343fa5fa4142a8d6347234a9e39adfd50d64d0d116c28e1"} Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.575118 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerStarted","Data":"e14fa839bddadc04cc48e4c1bbf3ec391965a760754c968bc133991f67dea5d0"} Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.575146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerStarted","Data":"843bc0f6bbe63f1ea284aa4172300cab660953744ee52d867acf1157477d2363"} Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.575156 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerStarted","Data":"126adb9207ce2cca5fe9a3d3191fdf79b4e75263e03b3cbc31e4c5294b2c36eb"} Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.596266 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.596247356 podStartE2EDuration="2.596247356s" podCreationTimestamp="2025-11-28 21:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:23.588522616 +0000 UTC m=+1443.057170525" watchObservedRunningTime="2025-11-28 21:13:23.596247356 +0000 UTC m=+1443.064895265" Nov 28 21:13:23 crc kubenswrapper[4957]: I1128 21:13:23.607548 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.607530793 podStartE2EDuration="2.607530793s" podCreationTimestamp="2025-11-28 21:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:23.604911829 +0000 UTC m=+1443.073559738" watchObservedRunningTime="2025-11-28 21:13:23.607530793 +0000 UTC m=+1443.076178702" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.431091 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-nxvvm"] Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.432904 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.438478 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-72swb" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.438506 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.438691 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.438895 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.457233 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nxvvm"] Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.468951 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.469033 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfp8j\" (UniqueName: \"kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.469220 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.469285 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.571027 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.571106 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfp8j\" (UniqueName: \"kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.571259 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: 
I1128 21:13:24.571316 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.576709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.582675 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.585307 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.596659 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfp8j\" (UniqueName: \"kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j\") pod \"aodh-db-sync-nxvvm\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:24 crc kubenswrapper[4957]: I1128 21:13:24.764451 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:25 crc kubenswrapper[4957]: I1128 21:13:25.394880 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nxvvm"] Nov 28 21:13:25 crc kubenswrapper[4957]: I1128 21:13:25.604372 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nxvvm" event={"ID":"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4","Type":"ContainerStarted","Data":"bdf5d8f7ce34c163d5556977dfdf0ee515f88efc32d888a158d2c799c00ad1bd"} Nov 28 21:13:25 crc kubenswrapper[4957]: I1128 21:13:25.924567 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:13:25 crc kubenswrapper[4957]: I1128 21:13:25.925609 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:13:26 crc kubenswrapper[4957]: I1128 21:13:26.973333 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 21:13:28 crc kubenswrapper[4957]: I1128 21:13:28.976328 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 21:13:30 crc kubenswrapper[4957]: I1128 21:13:30.676373 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nxvvm" event={"ID":"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4","Type":"ContainerStarted","Data":"c951c0455cd250da085184e6da43e31229f49cacf7ffee5198c944b5ad414283"} Nov 28 21:13:30 crc kubenswrapper[4957]: I1128 21:13:30.701251 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-nxvvm" podStartSLOduration=2.136056769 podStartE2EDuration="6.701230278s" podCreationTimestamp="2025-11-28 21:13:24 +0000 UTC" firstStartedPulling="2025-11-28 21:13:25.377929567 +0000 UTC m=+1444.846577476" lastFinishedPulling="2025-11-28 21:13:29.943103076 +0000 UTC m=+1449.411750985" observedRunningTime="2025-11-28 21:13:30.691745095 +0000 UTC m=+1450.160393004" watchObservedRunningTime="2025-11-28 21:13:30.701230278 +0000 UTC m=+1450.169878187" Nov 28 21:13:30 crc kubenswrapper[4957]: I1128 21:13:30.925058 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 21:13:30 crc kubenswrapper[4957]: I1128 21:13:30.925121 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 21:13:31 crc kubenswrapper[4957]: I1128 21:13:31.938486 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 21:13:31 crc kubenswrapper[4957]: I1128 21:13:31.938537 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 21:13:31 crc kubenswrapper[4957]: I1128 21:13:31.972727 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 21:13:32 crc kubenswrapper[4957]: I1128 21:13:32.014785 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 
Nov 28 21:13:32 crc kubenswrapper[4957]: I1128 21:13:32.241850 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 28 21:13:32 crc kubenswrapper[4957]: I1128 21:13:32.241904 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 28 21:13:32 crc kubenswrapper[4957]: I1128 21:13:32.729935 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 28 21:13:33 crc kubenswrapper[4957]: I1128 21:13:33.324618 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 21:13:33 crc kubenswrapper[4957]: I1128 21:13:33.324640 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 21:13:33 crc kubenswrapper[4957]: I1128 21:13:33.710357 4957 generic.go:334] "Generic (PLEG): container finished" podID="e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" containerID="c951c0455cd250da085184e6da43e31229f49cacf7ffee5198c944b5ad414283" exitCode=0
Nov 28 21:13:33 crc kubenswrapper[4957]: I1128 21:13:33.710554 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nxvvm" event={"ID":"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4","Type":"ContainerDied","Data":"c951c0455cd250da085184e6da43e31229f49cacf7ffee5198c944b5ad414283"}
Need to start a new one" pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.555880 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfp8j\" (UniqueName: \"kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j\") pod \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.555954 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts\") pod \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.556274 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle\") pod \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.556407 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data\") pod \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\" (UID: \"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4\") " Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.569990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j" (OuterVolumeSpecName: "kube-api-access-rfp8j") pod "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" (UID: "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4"). InnerVolumeSpecName "kube-api-access-rfp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.573686 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts" (OuterVolumeSpecName: "scripts") pod "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" (UID: "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.585464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" (UID: "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.604343 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data" (OuterVolumeSpecName: "config-data") pod "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" (UID: "e06d4815-a61b-46a8-bb15-6f7e21b2dfd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.663944 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.663978 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.663991 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfp8j\" (UniqueName: \"kubernetes.io/projected/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-kube-api-access-rfp8j\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.664005 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.733101 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nxvvm" event={"ID":"e06d4815-a61b-46a8-bb15-6f7e21b2dfd4","Type":"ContainerDied","Data":"bdf5d8f7ce34c163d5556977dfdf0ee515f88efc32d888a158d2c799c00ad1bd"} Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.733157 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf5d8f7ce34c163d5556977dfdf0ee515f88efc32d888a158d2c799c00ad1bd" Nov 28 21:13:36 crc kubenswrapper[4957]: I1128 21:13:35.733242 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nxvvm" Nov 28 21:13:38 crc kubenswrapper[4957]: I1128 21:13:38.992448 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:13:38 crc kubenswrapper[4957]: I1128 21:13:38.993026 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.307778 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 21:13:39 crc kubenswrapper[4957]: E1128 21:13:39.308790 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" containerName="aodh-db-sync" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.308808 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" containerName="aodh-db-sync" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.311153 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" containerName="aodh-db-sync" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.315868 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.322977 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.323334 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-72swb" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.329886 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.334961 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.466512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.466654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.466701 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9bz\" (UniqueName: \"kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.466789 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.568622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.568692 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9bz\" (UniqueName: \"kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.568780 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.568866 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: 
I1128 21:13:39.575080 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.580405 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.591056 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.595559 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9bz\" (UniqueName: \"kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz\") pod \"aodh-0\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " pod="openstack/aodh-0" Nov 28 21:13:39 crc kubenswrapper[4957]: I1128 21:13:39.641620 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.361548 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.578177 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.682355 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.798437 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data\") pod \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.798744 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle\") pod \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.798987 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxz8\" (UniqueName: \"kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8\") pod \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\" (UID: \"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea\") " Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.800022 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerStarted","Data":"ad659547fbc9e9b023a33f5fe045df9a19e1b4cf859b53a8e970f9c9351ea537"} Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.802761 4957 generic.go:334] "Generic (PLEG): container finished" podID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" containerID="c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3" exitCode=137 Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.802798 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea","Type":"ContainerDied","Data":"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3"} Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.802824 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a88fa5a6-ee13-4c30-b71a-f983d4dc38ea","Type":"ContainerDied","Data":"98465e79232aa091c40688a5b1bc08539c648a80beeb36cac5e9105686f0b55e"} Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.802844 4957 scope.go:117] "RemoveContainer" containerID="c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.803029 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.811645 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8" (OuterVolumeSpecName: "kube-api-access-gjxz8") pod "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" (UID: "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea"). InnerVolumeSpecName "kube-api-access-gjxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.868465 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data" (OuterVolumeSpecName: "config-data") pod "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" (UID: "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.880529 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" (UID: "a88fa5a6-ee13-4c30-b71a-f983d4dc38ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.901980 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.902014 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxz8\" (UniqueName: \"kubernetes.io/projected/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-kube-api-access-gjxz8\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:40 crc kubenswrapper[4957]: I1128 21:13:40.902026 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.000733 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.000798 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.008483 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.011989 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.029713 4957 scope.go:117] "RemoveContainer" containerID="c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3" Nov 28 21:13:41 crc kubenswrapper[4957]: E1128 21:13:41.030030 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3\": container with ID starting with c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3 not found: ID does not exist" containerID="c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.030077 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3"} err="failed to get container status \"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3\": rpc error: code = NotFound desc = could not find container \"c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3\": container with ID starting with c25da4d903d895a4919c6e1a0b0ebe211a85e6c895c69807e90c0bcc77b030b3 not found: ID does not exist" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.182330 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.200532 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 
21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.228090 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:41 crc kubenswrapper[4957]: E1128 21:13:41.228787 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.228808 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.229133 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.230113 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.232377 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.232824 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.232961 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.255533 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.417527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5ft\" (UniqueName: \"kubernetes.io/projected/6bed6daf-51f2-46cc-9512-a24925686b61-kube-api-access-hm5ft\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.417884 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.417933 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.417953 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.418113 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.519989 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.520300 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5ft\" (UniqueName: \"kubernetes.io/projected/6bed6daf-51f2-46cc-9512-a24925686b61-kube-api-access-hm5ft\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.520347 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.520375 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.520394 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.526134 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.526251 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.526852 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.527055 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bed6daf-51f2-46cc-9512-a24925686b61-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.539261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5ft\" (UniqueName: \"kubernetes.io/projected/6bed6daf-51f2-46cc-9512-a24925686b61-kube-api-access-hm5ft\") pod \"nova-cell1-novncproxy-0\" (UID: \"6bed6daf-51f2-46cc-9512-a24925686b61\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.548435 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:41 crc kubenswrapper[4957]: I1128 21:13:41.822558 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerStarted","Data":"b4402c9eee37ad3fda5f7b5254609f0e6fd906d86cc2810b480c30b45880179a"} Nov 28 21:13:42 crc kubenswrapper[4957]: W1128 21:13:42.120420 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bed6daf_51f2_46cc_9512_a24925686b61.slice/crio-dc07d7c96703716b30e22c8f7ea4b3706e1272c430e6aad29924610257a7993d WatchSource:0}: Error finding container dc07d7c96703716b30e22c8f7ea4b3706e1272c430e6aad29924610257a7993d: Status 404 returned error can't find the container with id dc07d7c96703716b30e22c8f7ea4b3706e1272c430e6aad29924610257a7993d Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.122514 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.246882 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.247793 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.258362 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.271653 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.693465 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.831293 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88fa5a6-ee13-4c30-b71a-f983d4dc38ea" path="/var/lib/kubelet/pods/a88fa5a6-ee13-4c30-b71a-f983d4dc38ea/volumes" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.869900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6bed6daf-51f2-46cc-9512-a24925686b61","Type":"ContainerStarted","Data":"69c06f56ac2924d473729dc6ee94a6d365d346db948282b5adb6b124e41dd2d1"} Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.869951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6bed6daf-51f2-46cc-9512-a24925686b61","Type":"ContainerStarted","Data":"dc07d7c96703716b30e22c8f7ea4b3706e1272c430e6aad29924610257a7993d"} Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.871329 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.880329 4957 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 21:13:42 crc kubenswrapper[4957]: I1128 21:13:42.919125 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9191040670000001 podStartE2EDuration="1.919104067s" podCreationTimestamp="2025-11-28 21:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:42.903602455 +0000 UTC m=+1462.372250364" watchObservedRunningTime="2025-11-28 21:13:42.919104067 +0000 UTC m=+1462.387751976" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.121386 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.123367 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.134659 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.273441 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.273513 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.273724 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.274152 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwxp\" (UniqueName: \"kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.274226 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.274255 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") 
" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.376313 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.376602 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.376760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.377198 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.377507 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.377537 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.378056 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwxp\" (UniqueName: \"kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.378454 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.379080 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: 
I1128 21:13:43.379034 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.379669 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.404857 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pwxp\" (UniqueName: \"kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp\") pod \"dnsmasq-dns-6b7bbf7cf9-pstbx\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:43 crc kubenswrapper[4957]: I1128 21:13:43.456783 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:44 crc kubenswrapper[4957]: I1128 21:13:44.127275 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:13:44 crc kubenswrapper[4957]: I1128 21:13:44.890916 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerStarted","Data":"89ff5575678594b11d39d0d5027b0c483aca85471fa71eb82a87780eee67ad06"} Nov 28 21:13:44 crc kubenswrapper[4957]: I1128 21:13:44.893134 4957 generic.go:334] "Generic (PLEG): container finished" podID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerID="caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8" exitCode=0 Nov 28 21:13:44 crc kubenswrapper[4957]: I1128 21:13:44.894518 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" event={"ID":"49c11425-f89c-47b1-bc01-d25c62f2e36e","Type":"ContainerDied","Data":"caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8"} Nov 28 21:13:44 crc kubenswrapper[4957]: I1128 21:13:44.894543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" event={"ID":"49c11425-f89c-47b1-bc01-d25c62f2e36e","Type":"ContainerStarted","Data":"1eb97b058360f4e198c2f98e26d15bc2f779c8a5c394ef89bc911496122197b2"} Nov 28 21:13:45 crc kubenswrapper[4957]: W1128 21:13:45.661730 4957 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3d1352_0fa9_4d01_b3ae_ef4c64acbb9c.slice/crio-d2dbbecd8672df274f62ea36a61b7ef3816bcc73aff08e2309ca80a04b3b6b97": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3d1352_0fa9_4d01_b3ae_ef4c64acbb9c.slice/crio-d2dbbecd8672df274f62ea36a61b7ef3816bcc73aff08e2309ca80a04b3b6b97/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3d1352_0fa9_4d01_b3ae_ef4c64acbb9c.slice/crio-d2dbbecd8672df274f62ea36a61b7ef3816bcc73aff08e2309ca80a04b3b6b97/memory.stat: no such device], continuing to push stats Nov 28 21:13:45 crc kubenswrapper[4957]: I1128 21:13:45.910472 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerID="f8027bd37c390e4a7fce3b41dba64641cb29e6846eda02b87616d583b8d2414e" exitCode=137 Nov 28 21:13:45 crc kubenswrapper[4957]: I1128 21:13:45.910545 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerDied","Data":"f8027bd37c390e4a7fce3b41dba64641cb29e6846eda02b87616d583b8d2414e"} Nov 28 21:13:45 crc kubenswrapper[4957]: I1128 21:13:45.913383 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" event={"ID":"49c11425-f89c-47b1-bc01-d25c62f2e36e","Type":"ContainerStarted","Data":"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e"} Nov 28 21:13:45 crc kubenswrapper[4957]: I1128 21:13:45.914903 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.373706 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.408294 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" podStartSLOduration=3.408269885 podStartE2EDuration="3.408269885s" podCreationTimestamp="2025-11-28 21:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:45.940552475 +0000 UTC m=+1465.409200394" watchObservedRunningTime="2025-11-28 21:13:46.408269885 +0000 UTC m=+1465.876917794" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.414729 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.415138 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-log" containerID="cri-o://843bc0f6bbe63f1ea284aa4172300cab660953744ee52d867acf1157477d2363" gracePeriod=30 Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.415366 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-api" containerID="cri-o://e14fa839bddadc04cc48e4c1bbf3ec391965a760754c968bc133991f67dea5d0" gracePeriod=30 Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.463142 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.463337 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.463488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc 
kubenswrapper[4957]: I1128 21:13:46.464566 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6c5x\" (UniqueName: \"kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.464685 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.465378 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.465597 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml\") pod \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\" (UID: \"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c\") " Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.466694 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.474620 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.490490 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts" (OuterVolumeSpecName: "scripts") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.499768 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x" (OuterVolumeSpecName: "kube-api-access-t6c5x") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "kube-api-access-t6c5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.514939 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.551282 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.568143 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.569344 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.569441 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6c5x\" (UniqueName: \"kubernetes.io/projected/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-kube-api-access-t6c5x\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.569500 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.569556 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.590678 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.614309 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data" (OuterVolumeSpecName: "config-data") pod "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" (UID: "cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.672325 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.673338 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.926037 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c","Type":"ContainerDied","Data":"d2dbbecd8672df274f62ea36a61b7ef3816bcc73aff08e2309ca80a04b3b6b97"} Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.926068 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.926116 4957 scope.go:117] "RemoveContainer" containerID="f8027bd37c390e4a7fce3b41dba64641cb29e6846eda02b87616d583b8d2414e" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.934905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerStarted","Data":"eba61aab852bc348d91e75d244ca07d10746cf59bd178a9feb4a72d44ec5c86a"} Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.937199 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ba8b660-8674-4932-b2df-226b0ed63933" containerID="843bc0f6bbe63f1ea284aa4172300cab660953744ee52d867acf1157477d2363" exitCode=143 Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.937243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerDied","Data":"843bc0f6bbe63f1ea284aa4172300cab660953744ee52d867acf1157477d2363"} Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.950514 4957 scope.go:117] "RemoveContainer" containerID="8c2ddfac7c7ea320b53a1643ec92fa2a52d1fea31432c0e9848ea844feb8b8be" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.953431 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.965920 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.991081 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:46 crc kubenswrapper[4957]: E1128 21:13:46.991802 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="sg-core" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.991819 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="sg-core" Nov 28 21:13:46 crc kubenswrapper[4957]: E1128 21:13:46.991852 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-notification-agent" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.991860 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-notification-agent" Nov 28 21:13:46 crc kubenswrapper[4957]: E1128 21:13:46.991881 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-central-agent" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.991888 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-central-agent" Nov 28 21:13:46 crc kubenswrapper[4957]: E1128 21:13:46.991908 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="proxy-httpd" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.991915 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="proxy-httpd" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.992201 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="sg-core" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 
21:13:46.992250 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-notification-agent" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.992276 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="proxy-httpd" Nov 28 21:13:46 crc kubenswrapper[4957]: I1128 21:13:46.992296 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" containerName="ceilometer-central-agent" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:46.995274 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:46.999972 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.000190 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.007905 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.075899 4957 scope.go:117] "RemoveContainer" containerID="bad57106a055b6ad070a352843c0372b663ebfb9d1e3788415aad7db7b1bf72e" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.081716 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.081865 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.082002 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.082129 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.082166 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.082312 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5szp\" (UniqueName: \"kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp\") pod 
\"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.082372 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.119267 4957 scope.go:117] "RemoveContainer" containerID="13815cf7d09601f737e420716013e18a1eadd8b7d391f1d5bf56dc9825525bed" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185163 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185466 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5szp\" (UniqueName: \"kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185519 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185610 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.185702 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.186684 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.190309 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.190317 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.194014 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.195890 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.202330 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5szp\" (UniqueName: \"kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp\") pod \"ceilometer-0\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.362505 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:47 crc kubenswrapper[4957]: I1128 21:13:47.962648 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.238023 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.854419 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c" path="/var/lib/kubelet/pods/cb3d1352-0fa9-4d01-b3ae-ef4c64acbb9c/volumes" Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.988150 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerStarted","Data":"2377c26828e7f46904cfb1bca4f5a5c893889560498fb8ebee1f78fc18b190cc"} Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.988313 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-api" containerID="cri-o://b4402c9eee37ad3fda5f7b5254609f0e6fd906d86cc2810b480c30b45880179a" gracePeriod=30 Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.988558 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-notifier" containerID="cri-o://eba61aab852bc348d91e75d244ca07d10746cf59bd178a9feb4a72d44ec5c86a" gracePeriod=30 Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.988634 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-evaluator" containerID="cri-o://89ff5575678594b11d39d0d5027b0c483aca85471fa71eb82a87780eee67ad06" gracePeriod=30 Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.989165 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-listener" containerID="cri-o://2377c26828e7f46904cfb1bca4f5a5c893889560498fb8ebee1f78fc18b190cc" gracePeriod=30 Nov 28 21:13:48 crc kubenswrapper[4957]: I1128 21:13:48.991804 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerStarted","Data":"2a8bdcdd32797f46226ad4c86e09d21d44e460b1573a06167a7d0e5213aa8980"} Nov 28 21:13:49 crc kubenswrapper[4957]: I1128 21:13:49.027253 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.667980322 podStartE2EDuration="10.027234517s" podCreationTimestamp="2025-11-28 21:13:39 +0000 UTC" firstStartedPulling="2025-11-28 21:13:40.366784385 +0000 UTC m=+1459.835432294" lastFinishedPulling="2025-11-28 21:13:47.72603858 +0000 UTC m=+1467.194686489" observedRunningTime="2025-11-28 21:13:49.010567017 +0000 UTC m=+1468.479214926" watchObservedRunningTime="2025-11-28 21:13:49.027234517 +0000 UTC m=+1468.495882426" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.008594 4957 generic.go:334] "Generic (PLEG): container finished" podID="553b04d2-b353-4a99-9c06-970275003669" containerID="89ff5575678594b11d39d0d5027b0c483aca85471fa71eb82a87780eee67ad06" exitCode=0 Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.009160 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="553b04d2-b353-4a99-9c06-970275003669" containerID="b4402c9eee37ad3fda5f7b5254609f0e6fd906d86cc2810b480c30b45880179a" exitCode=0 Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.008862 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerDied","Data":"89ff5575678594b11d39d0d5027b0c483aca85471fa71eb82a87780eee67ad06"} Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.009355 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerDied","Data":"b4402c9eee37ad3fda5f7b5254609f0e6fd906d86cc2810b480c30b45880179a"} Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.012611 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerStarted","Data":"f7995390e2f6cc68731707377122d6d7a8779cfe3ac75978514a78e6a8eff841"} Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.015875 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ba8b660-8674-4932-b2df-226b0ed63933" containerID="e14fa839bddadc04cc48e4c1bbf3ec391965a760754c968bc133991f67dea5d0" exitCode=0 Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.015916 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerDied","Data":"e14fa839bddadc04cc48e4c1bbf3ec391965a760754c968bc133991f67dea5d0"} Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.116180 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.282021 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle\") pod \"9ba8b660-8674-4932-b2df-226b0ed63933\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.282458 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data\") pod \"9ba8b660-8674-4932-b2df-226b0ed63933\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.282561 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrl4v\" (UniqueName: \"kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v\") pod \"9ba8b660-8674-4932-b2df-226b0ed63933\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.282822 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs\") pod \"9ba8b660-8674-4932-b2df-226b0ed63933\" (UID: \"9ba8b660-8674-4932-b2df-226b0ed63933\") " Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.283704 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs" (OuterVolumeSpecName: "logs") pod "9ba8b660-8674-4932-b2df-226b0ed63933" (UID: "9ba8b660-8674-4932-b2df-226b0ed63933"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.320445 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v" (OuterVolumeSpecName: "kube-api-access-jrl4v") pod "9ba8b660-8674-4932-b2df-226b0ed63933" (UID: "9ba8b660-8674-4932-b2df-226b0ed63933"). InnerVolumeSpecName "kube-api-access-jrl4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.339417 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data" (OuterVolumeSpecName: "config-data") pod "9ba8b660-8674-4932-b2df-226b0ed63933" (UID: "9ba8b660-8674-4932-b2df-226b0ed63933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.388175 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.388228 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrl4v\" (UniqueName: \"kubernetes.io/projected/9ba8b660-8674-4932-b2df-226b0ed63933-kube-api-access-jrl4v\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.388240 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ba8b660-8674-4932-b2df-226b0ed63933-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.416099 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ba8b660-8674-4932-b2df-226b0ed63933" (UID: "9ba8b660-8674-4932-b2df-226b0ed63933"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:50 crc kubenswrapper[4957]: I1128 21:13:50.491204 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba8b660-8674-4932-b2df-226b0ed63933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.028449 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerStarted","Data":"a49ab639faaa159c36762cb2f6eafadaba83da4c63eae031d55db2e5f2fa5cac"} Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.028773 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerStarted","Data":"c7aa320fda7fb1fe92cbd5587a746976ccaed6c8dd6c413e2610cd3112c359f5"} Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.031396 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ba8b660-8674-4932-b2df-226b0ed63933","Type":"ContainerDied","Data":"126adb9207ce2cca5fe9a3d3191fdf79b4e75263e03b3cbc31e4c5294b2c36eb"} Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.031454 4957 scope.go:117] "RemoveContainer" containerID="e14fa839bddadc04cc48e4c1bbf3ec391965a760754c968bc133991f67dea5d0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.031480 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.070817 4957 scope.go:117] "RemoveContainer" containerID="843bc0f6bbe63f1ea284aa4172300cab660953744ee52d867acf1157477d2363" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.082888 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.095949 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.117086 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:51 crc kubenswrapper[4957]: E1128 21:13:51.132873 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-api" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.132920 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-api" Nov 28 21:13:51 crc kubenswrapper[4957]: E1128 21:13:51.132961 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-log" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.132970 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-log" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.133334 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-log" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.133378 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" containerName="nova-api-api" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.135053 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.134940 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.142224 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.142584 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.142900 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.306343 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.306674 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.306808 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.306979 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.307122 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.307185 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4rn\" (UniqueName: \"kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409327 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409397 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs\") pod \"nova-api-0\" (UID: 
\"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409477 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409547 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409625 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.409683 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4rn\" (UniqueName: \"kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.410888 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.415913 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.416106 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.416536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.426126 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.426652 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4rn\" (UniqueName: \"kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn\") pod \"nova-api-0\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " 
pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.469474 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.551170 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.577576 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:51 crc kubenswrapper[4957]: W1128 21:13:51.996274 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d715bb_8202_4a79_8642_85cbc7a38ab9.slice/crio-d99281ebbe77af7084f32d7edaf644769a8fb575af2e2bad96a90e7f99b7dc9a WatchSource:0}: Error finding container d99281ebbe77af7084f32d7edaf644769a8fb575af2e2bad96a90e7f99b7dc9a: Status 404 returned error can't find the container with id d99281ebbe77af7084f32d7edaf644769a8fb575af2e2bad96a90e7f99b7dc9a Nov 28 21:13:51 crc kubenswrapper[4957]: I1128 21:13:51.997173 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.052965 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerStarted","Data":"d99281ebbe77af7084f32d7edaf644769a8fb575af2e2bad96a90e7f99b7dc9a"} Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.074485 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.222599 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nl79f"] Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.224183 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.227811 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.227994 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.234135 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nl79f"] Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.331441 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.331582 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfblf\" (UniqueName: \"kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.331613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.331646 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.434330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfblf\" (UniqueName: \"kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.434436 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.434506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.441154 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.445920 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.449057 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.453027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfblf\" (UniqueName: \"kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.456948 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data\") pod \"nova-cell1-cell-mapping-nl79f\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.553975 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:13:52 crc kubenswrapper[4957]: I1128 21:13:52.835309 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba8b660-8674-4932-b2df-226b0ed63933" path="/var/lib/kubelet/pods/9ba8b660-8674-4932-b2df-226b0ed63933/volumes" Nov 28 21:13:53 crc kubenswrapper[4957]: W1128 21:13:53.051352 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d60d5e0_a41b_4fd7_9426_ee0cda73c54e.slice/crio-c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37 WatchSource:0}: Error finding container c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37: Status 404 returned error can't find the container with id c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.052930 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nl79f"] Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.069688 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerStarted","Data":"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4"} Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.069731 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerStarted","Data":"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe"} Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.073235 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nl79f" event={"ID":"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e","Type":"ContainerStarted","Data":"c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37"} Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076363 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-central-agent" containerID="cri-o://f7995390e2f6cc68731707377122d6d7a8779cfe3ac75978514a78e6a8eff841" gracePeriod=30 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076645 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="proxy-httpd" containerID="cri-o://9133f4123232a1486bd2aa187df09c1efa442a51b3c08b6a8dfe323cc9da6145" gracePeriod=30 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076718 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="sg-core" containerID="cri-o://c7aa320fda7fb1fe92cbd5587a746976ccaed6c8dd6c413e2610cd3112c359f5" gracePeriod=30 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076764 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-notification-agent" containerID="cri-o://a49ab639faaa159c36762cb2f6eafadaba83da4c63eae031d55db2e5f2fa5cac" gracePeriod=30 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerStarted","Data":"9133f4123232a1486bd2aa187df09c1efa442a51b3c08b6a8dfe323cc9da6145"} Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.076827 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.091230 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.091199871 podStartE2EDuration="2.091199871s" podCreationTimestamp="2025-11-28 21:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:53.085766387 +0000 UTC m=+1472.554414296" watchObservedRunningTime="2025-11-28 21:13:53.091199871 +0000 UTC m=+1472.559847780" Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.125943 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.168274836 podStartE2EDuration="7.125914495s" podCreationTimestamp="2025-11-28 21:13:46 +0000 UTC" firstStartedPulling="2025-11-28 21:13:48.377009468 +0000 UTC m=+1467.845657377" lastFinishedPulling="2025-11-28 21:13:52.334649127 +0000 UTC m=+1471.803297036" observedRunningTime="2025-11-28 21:13:53.113847148 +0000 UTC m=+1472.582495057" watchObservedRunningTime="2025-11-28 21:13:53.125914495 +0000 UTC m=+1472.594562414" Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.458982 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.535604 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"] Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.535828 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="dnsmasq-dns" containerID="cri-o://c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee" gracePeriod=10 Nov 28 21:13:53 crc kubenswrapper[4957]: I1128 21:13:53.996584 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.087093 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.089522 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck8pq\" (UniqueName: \"kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.089590 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.089665 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.089773 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.089811 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0\") pod \"d9611510-ca79-4786-9ae4-de71a9238443\" (UID: \"d9611510-ca79-4786-9ae4-de71a9238443\") " Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.093408 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nl79f" event={"ID":"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e","Type":"ContainerStarted","Data":"74131dc3e6f3eb96b70d0be5c922c96db1d5a7183fc7fbac2810d8ab6354cb3e"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.097905 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq" (OuterVolumeSpecName: "kube-api-access-ck8pq") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "kube-api-access-ck8pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.100840 4957 generic.go:334] "Generic (PLEG): container finished" podID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerID="9133f4123232a1486bd2aa187df09c1efa442a51b3c08b6a8dfe323cc9da6145" exitCode=0 Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.100904 4957 generic.go:334] "Generic (PLEG): container finished" podID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerID="c7aa320fda7fb1fe92cbd5587a746976ccaed6c8dd6c413e2610cd3112c359f5" exitCode=2 Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.100913 4957 generic.go:334] "Generic (PLEG): container finished" podID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerID="a49ab639faaa159c36762cb2f6eafadaba83da4c63eae031d55db2e5f2fa5cac" exitCode=0 Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.101333 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerDied","Data":"9133f4123232a1486bd2aa187df09c1efa442a51b3c08b6a8dfe323cc9da6145"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.101486 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerDied","Data":"c7aa320fda7fb1fe92cbd5587a746976ccaed6c8dd6c413e2610cd3112c359f5"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.101585 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerDied","Data":"a49ab639faaa159c36762cb2f6eafadaba83da4c63eae031d55db2e5f2fa5cac"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.106764 4957 generic.go:334] "Generic (PLEG): container finished" podID="d9611510-ca79-4786-9ae4-de71a9238443" containerID="c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee" exitCode=0 Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.106874 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.106919 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" event={"ID":"d9611510-ca79-4786-9ae4-de71a9238443","Type":"ContainerDied","Data":"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.106949 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sgfk6" event={"ID":"d9611510-ca79-4786-9ae4-de71a9238443","Type":"ContainerDied","Data":"0550f8d846092e62c77f0ab3d39fc7fb3661f99a7bc7e25c9bfdbeaa39fb9b13"} Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.106965 4957 scope.go:117] "RemoveContainer" containerID="c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.165504 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.174767 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.187774 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config" (OuterVolumeSpecName: "config") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.192254 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.192959 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.192982 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck8pq\" (UniqueName: \"kubernetes.io/projected/d9611510-ca79-4786-9ae4-de71a9238443-kube-api-access-ck8pq\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.192992 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.193001 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.193010 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.213282 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9611510-ca79-4786-9ae4-de71a9238443" (UID: "d9611510-ca79-4786-9ae4-de71a9238443"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.295475 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9611510-ca79-4786-9ae4-de71a9238443-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.298225 4957 scope.go:117] "RemoveContainer" containerID="a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.333171 4957 scope.go:117] "RemoveContainer" containerID="c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee" Nov 28 21:13:54 crc kubenswrapper[4957]: E1128 21:13:54.333662 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee\": container with ID starting with c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee not found: ID does not exist" containerID="c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.333762 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee"} err="failed to get container status \"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee\": rpc error: code = NotFound desc = could not find container \"c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee\": container with ID starting with c6c7a0aa3a05a471b0b491629792d14200348bf5fb540e719cbe2e63c8ee9cee not found: ID does not exist" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.333854 4957 scope.go:117] "RemoveContainer" containerID="a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873" Nov 28 21:13:54 crc kubenswrapper[4957]: E1128 21:13:54.334328 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873\": container with ID starting with a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873 not found: ID does not exist" containerID="a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.334380 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873"} err="failed to get container status \"a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873\": rpc error: code = NotFound desc = could not find container \"a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873\": container with ID starting with a8d2889fda83c56818264400cd562fddfdb935235d595309791fc924bbdcf873 not found: ID does not exist" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.476931 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nl79f" podStartSLOduration=2.476913415 podStartE2EDuration="2.476913415s" podCreationTimestamp="2025-11-28 21:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:13:54.11680709 +0000 UTC m=+1473.585454999" watchObservedRunningTime="2025-11-28 21:13:54.476913415 +0000 UTC 
m=+1473.945561324" Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.480873 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"] Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.493755 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sgfk6"] Nov 28 21:13:54 crc kubenswrapper[4957]: I1128 21:13:54.825487 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9611510-ca79-4786-9ae4-de71a9238443" path="/var/lib/kubelet/pods/d9611510-ca79-4786-9ae4-de71a9238443/volumes" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.136298 4957 generic.go:334] "Generic (PLEG): container finished" podID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerID="f7995390e2f6cc68731707377122d6d7a8779cfe3ac75978514a78e6a8eff841" exitCode=0 Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.136395 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerDied","Data":"f7995390e2f6cc68731707377122d6d7a8779cfe3ac75978514a78e6a8eff841"} Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.687341 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.748671 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5szp\" (UniqueName: \"kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.748912 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.748948 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749188 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749230 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749230 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749343 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749428 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data\") pod \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\" (UID: \"2e6c4f30-6e82-4780-ad8b-52af42dd9006\") " Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749520 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749885 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.749904 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e6c4f30-6e82-4780-ad8b-52af42dd9006-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.771365 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp" (OuterVolumeSpecName: "kube-api-access-t5szp") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "kube-api-access-t5szp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.776355 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts" (OuterVolumeSpecName: "scripts") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.793035 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.851890 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.852084 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5szp\" (UniqueName: \"kubernetes.io/projected/2e6c4f30-6e82-4780-ad8b-52af42dd9006-kube-api-access-t5szp\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.852141 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.880354 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.888121 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data" (OuterVolumeSpecName: "config-data") pod "2e6c4f30-6e82-4780-ad8b-52af42dd9006" (UID: "2e6c4f30-6e82-4780-ad8b-52af42dd9006"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.954338 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:56 crc kubenswrapper[4957]: I1128 21:13:56.954373 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6c4f30-6e82-4780-ad8b-52af42dd9006-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.150340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e6c4f30-6e82-4780-ad8b-52af42dd9006","Type":"ContainerDied","Data":"2a8bdcdd32797f46226ad4c86e09d21d44e460b1573a06167a7d0e5213aa8980"} Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.150401 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.151338 4957 scope.go:117] "RemoveContainer" containerID="9133f4123232a1486bd2aa187df09c1efa442a51b3c08b6a8dfe323cc9da6145" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.189960 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.191256 4957 scope.go:117] "RemoveContainer" containerID="c7aa320fda7fb1fe92cbd5587a746976ccaed6c8dd6c413e2610cd3112c359f5" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.204086 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.223569 4957 scope.go:117] "RemoveContainer" containerID="a49ab639faaa159c36762cb2f6eafadaba83da4c63eae031d55db2e5f2fa5cac" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.235566 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.236858 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="sg-core" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.236885 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="sg-core" Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.236937 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-notification-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.236947 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-notification-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.236996 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="dnsmasq-dns" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.237006 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="dnsmasq-dns" Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.237017 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="proxy-httpd" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.237024 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="proxy-httpd" Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.237059 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-central-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.237068 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-central-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: E1128 21:13:57.237097 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="init" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.237107 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="init" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.255974 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="proxy-httpd" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.256048 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9611510-ca79-4786-9ae4-de71a9238443" containerName="dnsmasq-dns" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.256100 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-notification-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.256129 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="ceilometer-central-agent" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.256177 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" containerName="sg-core" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.262248 4957 scope.go:117] "RemoveContainer" containerID="f7995390e2f6cc68731707377122d6d7a8779cfe3ac75978514a78e6a8eff841" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.267351 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.271163 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.273394 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.282383 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.468512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.468697 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.468785 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.468872 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.469010 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbxl\" (UniqueName: \"kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl\") pod \"ceilometer-0\" (UID: 
\"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.469279 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.469334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.572385 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.572932 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.572938 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.573129 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.573319 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.573626 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.574191 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.574419 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data\") pod \"ceilometer-0\" (UID: 
\"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.574586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbxl\" (UniqueName: \"kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.579008 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.579878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.580510 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.581991 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.595176 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbxl\" (UniqueName: \"kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl\") pod \"ceilometer-0\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " pod="openstack/ceilometer-0" Nov 28 21:13:57 crc kubenswrapper[4957]: I1128 21:13:57.892421 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:13:58 crc kubenswrapper[4957]: I1128 21:13:58.407704 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:13:58 crc kubenswrapper[4957]: I1128 21:13:58.825792 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6c4f30-6e82-4780-ad8b-52af42dd9006" path="/var/lib/kubelet/pods/2e6c4f30-6e82-4780-ad8b-52af42dd9006/volumes" Nov 28 21:13:59 crc kubenswrapper[4957]: I1128 21:13:59.182349 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" containerID="74131dc3e6f3eb96b70d0be5c922c96db1d5a7183fc7fbac2810d8ab6354cb3e" exitCode=0 Nov 28 21:13:59 crc kubenswrapper[4957]: I1128 21:13:59.182438 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nl79f" event={"ID":"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e","Type":"ContainerDied","Data":"74131dc3e6f3eb96b70d0be5c922c96db1d5a7183fc7fbac2810d8ab6354cb3e"} Nov 28 21:13:59 crc kubenswrapper[4957]: I1128 21:13:59.185485 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerStarted","Data":"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"} Nov 28 21:13:59 crc kubenswrapper[4957]: I1128 21:13:59.185519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerStarted","Data":"28c2d16436e31478fad9e2713452f70f20001dfb838e667fa267fd7544cbc9d7"} Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.207276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerStarted","Data":"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"} Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.658417 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.842197 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfblf\" (UniqueName: \"kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf\") pod \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.842373 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts\") pod \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.842568 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle\") pod \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.842627 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data\") pod \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\" (UID: \"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e\") " Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.847388 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts" (OuterVolumeSpecName: "scripts") pod "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" (UID: "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.855917 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf" (OuterVolumeSpecName: "kube-api-access-cfblf") pod "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" (UID: "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e"). InnerVolumeSpecName "kube-api-access-cfblf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.874862 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data" (OuterVolumeSpecName: "config-data") pod "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" (UID: "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.881943 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" (UID: "8d60d5e0-a41b-4fd7-9426-ee0cda73c54e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.948426 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.948472 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.948499 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfblf\" (UniqueName: \"kubernetes.io/projected/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-kube-api-access-cfblf\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:00 crc kubenswrapper[4957]: I1128 21:14:00.948515 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.232619 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerStarted","Data":"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"} Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.234841 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nl79f" event={"ID":"8d60d5e0-a41b-4fd7-9426-ee0cda73c54e","Type":"ContainerDied","Data":"c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37"} Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.234890 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a7e26ffa0e0bbec435f1992d900d39f908be9005960212b3f147059ef19e37" Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.234959 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nl79f" Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.339911 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.340158 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-log" containerID="cri-o://705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" gracePeriod=30 Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.340290 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-api" containerID="cri-o://a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" gracePeriod=30 Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.359925 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.360624 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerName="nova-scheduler-scheduler" containerID="cri-o://f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" gracePeriod=30 Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.376614 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.376859 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-log" containerID="cri-o://3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db" gracePeriod=30 Nov 28 21:14:01 crc kubenswrapper[4957]: I1128 21:14:01.376924 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-metadata" containerID="cri-o://3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11" gracePeriod=30 Nov 28 21:14:01 crc kubenswrapper[4957]: E1128 21:14:01.978877 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:14:01 crc kubenswrapper[4957]: E1128 21:14:01.982310 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:14:01 crc kubenswrapper[4957]: E1128 21:14:01.986142 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 21:14:01 crc kubenswrapper[4957]: E1128 21:14:01.986191 4957 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerName="nova-scheduler-scheduler" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.012089 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.172950 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.173005 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.173055 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.173078 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4rn\" (UniqueName: \"kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.173173 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.173224 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data\") pod \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\" (UID: \"b0d715bb-8202-4a79-8642-85cbc7a38ab9\") " Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.174126 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs" (OuterVolumeSpecName: "logs") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.182409 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn" (OuterVolumeSpecName: "kube-api-access-9z4rn") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "kube-api-access-9z4rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.208359 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.220135 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data" (OuterVolumeSpecName: "config-data") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.250514 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerStarted","Data":"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129"} Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.251353 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.252548 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256297 4957 generic.go:334] "Generic (PLEG): container finished" podID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerID="a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" exitCode=0 Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256330 4957 generic.go:334] "Generic (PLEG): container finished" podID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerID="705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" exitCode=143 Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256352 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerDied","Data":"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4"} Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256428 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerDied","Data":"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe"} Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256438 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0d715bb-8202-4a79-8642-85cbc7a38ab9","Type":"ContainerDied","Data":"d99281ebbe77af7084f32d7edaf644769a8fb575af2e2bad96a90e7f99b7dc9a"} Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.256452 4957 scope.go:117] "RemoveContainer" containerID="a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.264521 4957 generic.go:334] "Generic (PLEG): container finished" podID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerID="3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db" exitCode=143 Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.264571 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerDied","Data":"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db"} Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.265876 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b0d715bb-8202-4a79-8642-85cbc7a38ab9" (UID: "b0d715bb-8202-4a79-8642-85cbc7a38ab9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275629 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275666 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d715bb-8202-4a79-8642-85cbc7a38ab9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275677 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275686 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275697 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z4rn\" (UniqueName: \"kubernetes.io/projected/b0d715bb-8202-4a79-8642-85cbc7a38ab9-kube-api-access-9z4rn\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.275706 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d715bb-8202-4a79-8642-85cbc7a38ab9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.290421 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.046883792 podStartE2EDuration="5.29039888s" podCreationTimestamp="2025-11-28 21:13:57 +0000 UTC" firstStartedPulling="2025-11-28 21:13:58.411360264 +0000 UTC m=+1477.880008173" lastFinishedPulling="2025-11-28 21:14:01.654875352 +0000 UTC m=+1481.123523261" observedRunningTime="2025-11-28 21:14:02.276678592 +0000 UTC m=+1481.745326501" watchObservedRunningTime="2025-11-28 21:14:02.29039888 +0000 UTC m=+1481.759046789" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.295968 4957 scope.go:117] "RemoveContainer" containerID="705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.319610 4957 scope.go:117] "RemoveContainer" containerID="a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" Nov 28 21:14:02 crc kubenswrapper[4957]: E1128 21:14:02.320044 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4\": container with ID starting with a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4 not found: ID does not exist" containerID="a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.320087 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4"} err="failed to get container status \"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4\": rpc error: code = NotFound desc = could not find container \"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4\": 
container with ID starting with a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4 not found: ID does not exist" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.320111 4957 scope.go:117] "RemoveContainer" containerID="705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" Nov 28 21:14:02 crc kubenswrapper[4957]: E1128 21:14:02.320640 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe\": container with ID starting with 705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe not found: ID does not exist" containerID="705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.320670 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe"} err="failed to get container status \"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe\": rpc error: code = NotFound desc = could not find container \"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe\": container with ID starting with 705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe not found: ID does not exist" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.320693 4957 scope.go:117] "RemoveContainer" containerID="a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.321053 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4"} err="failed to get container status \"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4\": rpc error: code = NotFound desc = could not find container \"a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4\": container with ID starting with a941e5ced3771d8431706642009ad338dc1eda81eb3abdd4b5a63c058f6ee4c4 not found: ID does not exist" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.321072 4957 scope.go:117] "RemoveContainer" containerID="705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.321452 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe"} err="failed to get container status \"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe\": rpc error: code = NotFound desc = could not find container \"705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe\": container with ID starting with 705fb986b72eebd05b144cdb45da6c26dc230abdf2edc280a7bf3bd6885ad1fe not found: ID does not exist" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.594428 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.607124 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.633134 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:02 crc kubenswrapper[4957]: E1128 21:14:02.633667 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-api" Nov 28 
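
The repeated RemoveContainer / "DeleteContainer returned error" pairs above are the kubelet retrying removal of containers whose records are already gone from CRI-O: a NotFound from the runtime means the desired state is already reached, so the error is tolerated and the sync loop moves on. A minimal stdlib-only Go sketch of that tolerance pattern follows; the runtime interface and the string match on "NotFound" are illustrative assumptions (the real kubelet inspects gRPC status codes over the CRI connection):

    package main

    import (
        "fmt"
        "strings"
    )

    // runtime abstracts the single CRI call this sketch needs (assumed
    // interface; the real kubelet talks gRPC to CRI-O).
    type runtime interface {
        RemoveContainer(id string) error
    }

    type fakeRuntime struct{ containers map[string]bool }

    func (r *fakeRuntime) RemoveContainer(id string) error {
        if !r.containers[id] {
            return fmt.Errorf("rpc error: code = NotFound desc = could not find container %q", id)
        }
        delete(r.containers, id)
        return nil
    }

    // removeIfPresent treats "not found" as success: the container is gone,
    // which is exactly the state removal was trying to reach.
    func removeIfPresent(rt runtime, id string) error {
        if err := rt.RemoveContainer(id); err != nil {
            if strings.Contains(err.Error(), "NotFound") {
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        rt := &fakeRuntime{containers: map[string]bool{"705fb986": true}}
        for _, id := range []string{"705fb986", "705fb986", "a941e5ce"} {
            if err := removeIfPresent(rt, id); err != nil {
                fmt.Println("remove failed:", err)
                continue
            }
            fmt.Println("removed or already absent:", id)
        }
    }

This is why the second and third removal attempts above only produce info-level "DeleteContainer returned error" lines rather than failing the pod teardown.
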
21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.633687 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-api" Nov 28 21:14:02 crc kubenswrapper[4957]: E1128 21:14:02.633741 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" containerName="nova-manage" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.633750 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" containerName="nova-manage" Nov 28 21:14:02 crc kubenswrapper[4957]: E1128 21:14:02.633774 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-log" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.633784 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-log" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.633989 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-api" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.634012 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" containerName="nova-manage" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.634037 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" containerName="nova-api-log" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.635611 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.638929 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.639161 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.639488 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.677671 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.788088 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.788375 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqjn2\" (UniqueName: \"kubernetes.io/projected/1fd99b25-e3b2-439d-874c-6ae3351f9cea-kube-api-access-kqjn2\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.788504 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-config-data\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 
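
The cpu_manager.go:410 / memory_manager.go:354 lines just above show RemoveStaleState sweeping per-container resource assignments whose pod UIDs (b0d715bb-..., 8d60d5e0-...) are no longer active, right before the replacement nova-api-0 is admitted. A minimal sketch of that sweep, assuming a string stands in for the checkpointed CPU set (the real managers persist CPU sets and NUMA hints, not strings):

    package main

    import "fmt"

    type key struct{ podUID, containerName string }

    // removeStaleState drops assignments for containers that are no longer
    // active, mirroring the cpu_manager/memory_manager sweep in the log.
    func removeStaleState(assignments map[key]string, active map[key]bool) {
        for k := range assignments {
            if !active[k] {
                fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n",
                    k.podUID, k.containerName)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {podUID: "b0d715bb", containerName: "nova-api-api"}: "cpuset 0-1",
            {podUID: "8d60d5e0", containerName: "nova-manage"}:  "cpuset 2",
        }
        // After the DELETE/ADD cycle the old UIDs are gone entirely.
        removeStaleState(assignments, map[key]bool{})
        fmt.Println("remaining assignments:", len(assignments))
    }
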
21:14:02.788669 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.788822 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd99b25-e3b2-439d-874c-6ae3351f9cea-logs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.788950 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.826369 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d715bb-8202-4a79-8642-85cbc7a38ab9" path="/var/lib/kubelet/pods/b0d715bb-8202-4a79-8642-85cbc7a38ab9/volumes" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.890601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd99b25-e3b2-439d-874c-6ae3351f9cea-logs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891545 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891642 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqjn2\" (UniqueName: \"kubernetes.io/projected/1fd99b25-e3b2-439d-874c-6ae3351f9cea-kube-api-access-kqjn2\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891695 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-config-data\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891752 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.891330 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd99b25-e3b2-439d-874c-6ae3351f9cea-logs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.896854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-config-data\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.899089 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.898527 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.903513 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd99b25-e3b2-439d-874c-6ae3351f9cea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.909712 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqjn2\" (UniqueName: \"kubernetes.io/projected/1fd99b25-e3b2-439d-874c-6ae3351f9cea-kube-api-access-kqjn2\") pod \"nova-api-0\" (UID: \"1fd99b25-e3b2-439d-874c-6ae3351f9cea\") " pod="openstack/nova-api-0" Nov 28 21:14:02 crc kubenswrapper[4957]: I1128 21:14:02.976580 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 21:14:03 crc kubenswrapper[4957]: I1128 21:14:03.515167 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 21:14:03 crc kubenswrapper[4957]: W1128 21:14:03.520850 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd99b25_e3b2_439d_874c_6ae3351f9cea.slice/crio-d9e04fbfd5ef8a091f3c09e080a97e99becf4ef33aee1c2cb9679468a40480ac WatchSource:0}: Error finding container d9e04fbfd5ef8a091f3c09e080a97e99becf4ef33aee1c2cb9679468a40480ac: Status 404 returned error can't find the container with id d9e04fbfd5ef8a091f3c09e080a97e99becf4ef33aee1c2cb9679468a40480ac Nov 28 21:14:04 crc kubenswrapper[4957]: I1128 21:14:04.292629 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd99b25-e3b2-439d-874c-6ae3351f9cea","Type":"ContainerStarted","Data":"6e8e0d94e62a7ea471a9daf5f3587c384b697c6a9922bf15d0d9a8b06c0d1382"} Nov 28 21:14:04 crc kubenswrapper[4957]: I1128 21:14:04.293015 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd99b25-e3b2-439d-874c-6ae3351f9cea","Type":"ContainerStarted","Data":"945335697d9efe4c00ba39599e5f43485fa6a3ce00aa4ea2513e564b43380445"} Nov 28 21:14:04 crc kubenswrapper[4957]: I1128 21:14:04.293028 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd99b25-e3b2-439d-874c-6ae3351f9cea","Type":"ContainerStarted","Data":"d9e04fbfd5ef8a091f3c09e080a97e99becf4ef33aee1c2cb9679468a40480ac"} Nov 28 21:14:04 crc kubenswrapper[4957]: I1128 21:14:04.320978 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.32093271 podStartE2EDuration="2.32093271s" podCreationTimestamp="2025-11-28 21:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:14:04.311761095 +0000 UTC m=+1483.780409004" watchObservedRunningTime="2025-11-28 21:14:04.32093271 +0000 UTC m=+1483.789580629" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.072539 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.150746 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs\") pod \"ab395a88-867c-404e-9284-4d8dc3d78a41\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.150832 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whlzp\" (UniqueName: \"kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp\") pod \"ab395a88-867c-404e-9284-4d8dc3d78a41\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.150947 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs\") pod \"ab395a88-867c-404e-9284-4d8dc3d78a41\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.151085 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle\") pod \"ab395a88-867c-404e-9284-4d8dc3d78a41\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.151139 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data\") pod \"ab395a88-867c-404e-9284-4d8dc3d78a41\" (UID: \"ab395a88-867c-404e-9284-4d8dc3d78a41\") " Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.151928 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs" (OuterVolumeSpecName: "logs") pod "ab395a88-867c-404e-9284-4d8dc3d78a41" (UID: "ab395a88-867c-404e-9284-4d8dc3d78a41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.160961 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp" (OuterVolumeSpecName: "kube-api-access-whlzp") pod "ab395a88-867c-404e-9284-4d8dc3d78a41" (UID: "ab395a88-867c-404e-9284-4d8dc3d78a41"). InnerVolumeSpecName "kube-api-access-whlzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.197522 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data" (OuterVolumeSpecName: "config-data") pod "ab395a88-867c-404e-9284-4d8dc3d78a41" (UID: "ab395a88-867c-404e-9284-4d8dc3d78a41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.199951 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab395a88-867c-404e-9284-4d8dc3d78a41" (UID: "ab395a88-867c-404e-9284-4d8dc3d78a41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.260520 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.260545 4957 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab395a88-867c-404e-9284-4d8dc3d78a41-logs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.260554 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whlzp\" (UniqueName: \"kubernetes.io/projected/ab395a88-867c-404e-9284-4d8dc3d78a41-kube-api-access-whlzp\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.260563 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.260659 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab395a88-867c-404e-9284-4d8dc3d78a41" (UID: "ab395a88-867c-404e-9284-4d8dc3d78a41"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.305616 4957 generic.go:334] "Generic (PLEG): container finished" podID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerID="3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11" exitCode=0 Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.306811 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.307873 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerDied","Data":"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11"} Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.307903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab395a88-867c-404e-9284-4d8dc3d78a41","Type":"ContainerDied","Data":"2b092a093002437fe91f38343e99b5a4755bfbf60192173b9c1dbbddde049b8d"} Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.308019 4957 scope.go:117] "RemoveContainer" containerID="3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.350528 4957 scope.go:117] "RemoveContainer" containerID="3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.351450 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.363974 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab395a88-867c-404e-9284-4d8dc3d78a41-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.373262 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.374704 4957 scope.go:117] "RemoveContainer" containerID="3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11" Nov 28 21:14:05 crc kubenswrapper[4957]: E1128 21:14:05.375304 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11\": container with ID starting with 3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11 not found: ID does not exist" containerID="3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.375338 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11"} err="failed to get container status \"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11\": rpc error: code = NotFound desc = could not find container \"3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11\": container with ID starting with 3200c6e5d256bc10403f04118275f7d516a79b96b7cc3c14634a232092e5ac11 not found: ID does not exist" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.375363 4957 scope.go:117] "RemoveContainer" containerID="3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db" Nov 28 21:14:05 crc kubenswrapper[4957]: E1128 21:14:05.375623 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db\": container with ID starting with 3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db not found: ID does not exist" containerID="3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.375669 4957 
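
Note how the delete/re-add cycle around this point keeps the pod name openstack/nova-metadata-0 but changes the UID (ab395a88-... to 27892901-...): every container, volume, and resource-manager record in these logs follows the UID, never the name. A toy sketch of that keying, with the UIDs copied from the log and everything else illustrative:

    package main

    import "fmt"

    // The kubelet keys local state by pod UID, not pod name.
    type podState struct {
        name    string
        volumes []string
    }

    func main() {
        state := map[string]podState{} // keyed by UID

        oldUID := "ab395a88-867c-404e-9284-4d8dc3d78a41"
        newUID := "27892901-c588-481e-8b3c-363e2128f7d3"
        vols := []string{"logs", "config-data", "combined-ca-bundle", "nova-metadata-tls-certs"}

        state[oldUID] = podState{name: "nova-metadata-0", volumes: vols}
        delete(state, oldUID) // SyncLoop DELETE/REMOVE: old identity discarded
        state[newUID] = podState{name: "nova-metadata-0", volumes: vols} // SyncLoop ADD

        fmt.Println("tracked pods:", len(state)) // still 1; names repeat, UIDs never do
    }
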
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db"} err="failed to get container status \"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db\": rpc error: code = NotFound desc = could not find container \"3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db\": container with ID starting with 3f590e6ec4f94413787a0a0e421be079bef4c4a2171b5907ee32c39b0417c5db not found: ID does not exist" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.394775 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:05 crc kubenswrapper[4957]: E1128 21:14:05.395271 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-log" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.395284 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-log" Nov 28 21:14:05 crc kubenswrapper[4957]: E1128 21:14:05.395342 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-metadata" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.395348 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-metadata" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.395601 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-log" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.395612 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" containerName="nova-metadata-metadata" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.396971 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.401336 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.401439 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.418559 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.569571 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27892901-c588-481e-8b3c-363e2128f7d3-logs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.569676 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krtx\" (UniqueName: \"kubernetes.io/projected/27892901-c588-481e-8b3c-363e2128f7d3-kube-api-access-4krtx\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.569722 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.569808 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.569925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-config-data\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.672528 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.672700 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-config-data\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.672804 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27892901-c588-481e-8b3c-363e2128f7d3-logs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc 
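
The volume lines above trace the two-phase flow the kubelet runs for every new pod: first VerifyControllerAttachedVolume records each desired volume as attached, then MountVolume is started and MountVolume.SetUp succeeds once the secret, projected, or emptyDir content is materialized on disk. A deliberately simplified Go sketch of that desired-state-driven progression (the real reconciler in operationExecutor runs these steps asynchronously with retries):

    package main

    import "fmt"

    // volumePhase tracks one volume through the two phases logged above.
    type volumePhase int

    const (
        phaseDesired volumePhase = iota
        phaseAttached
        phaseMounted
    )

    // reconcile advances every volume one phase per pass.
    func reconcile(world map[string]volumePhase) {
        for name, p := range world {
            switch p {
            case phaseDesired:
                fmt.Println("VerifyControllerAttachedVolume started for", name)
                world[name] = phaseAttached
            case phaseAttached:
                fmt.Println("MountVolume.SetUp succeeded for", name)
                world[name] = phaseMounted
            }
        }
    }

    func main() {
        world := map[string]volumePhase{
            "logs": phaseDesired, "config-data": phaseDesired,
            "combined-ca-bundle": phaseDesired, "nova-metadata-tls-certs": phaseDesired,
            "kube-api-access-4krtx": phaseDesired,
        }
        reconcile(world) // pass 1: attach bookkeeping
        reconcile(world) // pass 2: mount
    }

Only once every volume reaches the mounted phase does the log proceed to "No sandbox for pod can be found. Need to start a new one" and sandbox creation.
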
kubenswrapper[4957]: I1128 21:14:05.672874 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krtx\" (UniqueName: \"kubernetes.io/projected/27892901-c588-481e-8b3c-363e2128f7d3-kube-api-access-4krtx\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.672912 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.674245 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27892901-c588-481e-8b3c-363e2128f7d3-logs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.678092 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-config-data\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.680451 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.680591 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/27892901-c588-481e-8b3c-363e2128f7d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.696041 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krtx\" (UniqueName: \"kubernetes.io/projected/27892901-c588-481e-8b3c-363e2128f7d3-kube-api-access-4krtx\") pod \"nova-metadata-0\" (UID: \"27892901-c588-481e-8b3c-363e2128f7d3\") " pod="openstack/nova-metadata-0" Nov 28 21:14:05 crc kubenswrapper[4957]: I1128 21:14:05.715163 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.234629 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 21:14:06 crc kubenswrapper[4957]: W1128 21:14:06.238597 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27892901_c588_481e_8b3c_363e2128f7d3.slice/crio-5571c7a6fb56de0b083b18790570e452ea44517b7585e38ee36afe88de2f9180 WatchSource:0}: Error finding container 5571c7a6fb56de0b083b18790570e452ea44517b7585e38ee36afe88de2f9180: Status 404 returned error can't find the container with id 5571c7a6fb56de0b083b18790570e452ea44517b7585e38ee36afe88de2f9180 Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.321434 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27892901-c588-481e-8b3c-363e2128f7d3","Type":"ContainerStarted","Data":"5571c7a6fb56de0b083b18790570e452ea44517b7585e38ee36afe88de2f9180"} Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.326795 4957 generic.go:334] "Generic (PLEG): container finished" podID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerID="f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" exitCode=0 Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.326882 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59df07c4-3c97-44c0-b83f-bd70e39ba203","Type":"ContainerDied","Data":"f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9"} Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.691270 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.802491 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data\") pod \"59df07c4-3c97-44c0-b83f-bd70e39ba203\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.802524 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle\") pod \"59df07c4-3c97-44c0-b83f-bd70e39ba203\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.802720 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x4jp\" (UniqueName: \"kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp\") pod \"59df07c4-3c97-44c0-b83f-bd70e39ba203\" (UID: \"59df07c4-3c97-44c0-b83f-bd70e39ba203\") " Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.811478 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp" (OuterVolumeSpecName: "kube-api-access-7x4jp") pod "59df07c4-3c97-44c0-b83f-bd70e39ba203" (UID: "59df07c4-3c97-44c0-b83f-bd70e39ba203"). InnerVolumeSpecName "kube-api-access-7x4jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.831732 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab395a88-867c-404e-9284-4d8dc3d78a41" path="/var/lib/kubelet/pods/ab395a88-867c-404e-9284-4d8dc3d78a41/volumes" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.842407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data" (OuterVolumeSpecName: "config-data") pod "59df07c4-3c97-44c0-b83f-bd70e39ba203" (UID: "59df07c4-3c97-44c0-b83f-bd70e39ba203"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.849335 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59df07c4-3c97-44c0-b83f-bd70e39ba203" (UID: "59df07c4-3c97-44c0-b83f-bd70e39ba203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.905458 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x4jp\" (UniqueName: \"kubernetes.io/projected/59df07c4-3c97-44c0-b83f-bd70e39ba203-kube-api-access-7x4jp\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.905488 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:06 crc kubenswrapper[4957]: I1128 21:14:06.905498 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59df07c4-3c97-44c0-b83f-bd70e39ba203-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.342164 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27892901-c588-481e-8b3c-363e2128f7d3","Type":"ContainerStarted","Data":"632177f5e99afc43dd6506d2e74beabfcccf8ad00340f7061f1536b6600e8af9"} Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.342489 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27892901-c588-481e-8b3c-363e2128f7d3","Type":"ContainerStarted","Data":"4259660a68598edfb2d91fc9af99cda9ab6be4039223116ad1e2cc8d2ff66725"} Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.347552 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59df07c4-3c97-44c0-b83f-bd70e39ba203","Type":"ContainerDied","Data":"9da17ff814dae48c9343fa5fa4142a8d6347234a9e39adfd50d64d0d116c28e1"} Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.347581 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.347597 4957 scope.go:117] "RemoveContainer" containerID="f0358d329416d8079de5055d7388a77647d42b02bcfb8bff8f6af9794f830ee9" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.399502 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.399481423 podStartE2EDuration="2.399481423s" podCreationTimestamp="2025-11-28 21:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:14:07.379241595 +0000 UTC m=+1486.847889504" watchObservedRunningTime="2025-11-28 21:14:07.399481423 +0000 UTC m=+1486.868129342" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.422274 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.442417 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.467095 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:07 crc kubenswrapper[4957]: E1128 21:14:07.468088 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerName="nova-scheduler-scheduler" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.468167 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerName="nova-scheduler-scheduler" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.468605 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" containerName="nova-scheduler-scheduler" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.469662 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.471655 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.481618 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.622635 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-config-data\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.622779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.622807 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4zl\" (UniqueName: \"kubernetes.io/projected/0f526d90-5313-4d45-a9d1-760dbf18440d-kube-api-access-qb4zl\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.724688 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-config-data\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.724801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.724826 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4zl\" (UniqueName: \"kubernetes.io/projected/0f526d90-5313-4d45-a9d1-760dbf18440d-kube-api-access-qb4zl\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.728535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-config-data\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.729938 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f526d90-5313-4d45-a9d1-760dbf18440d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.755304 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4zl\" (UniqueName: 
\"kubernetes.io/projected/0f526d90-5313-4d45-a9d1-760dbf18440d-kube-api-access-qb4zl\") pod \"nova-scheduler-0\" (UID: \"0f526d90-5313-4d45-a9d1-760dbf18440d\") " pod="openstack/nova-scheduler-0" Nov 28 21:14:07 crc kubenswrapper[4957]: I1128 21:14:07.794293 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.258531 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.366023 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f526d90-5313-4d45-a9d1-760dbf18440d","Type":"ContainerStarted","Data":"f281f4370afa66b4e0c4c2761490a465d6d0a598c66e6dbb982540ce02668688"} Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.827127 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59df07c4-3c97-44c0-b83f-bd70e39ba203" path="/var/lib/kubelet/pods/59df07c4-3c97-44c0-b83f-bd70e39ba203/volumes" Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.992962 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.993384 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.993442 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.994538 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:14:08 crc kubenswrapper[4957]: I1128 21:14:08.994628 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d" gracePeriod=600 Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.379346 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f526d90-5313-4d45-a9d1-760dbf18440d","Type":"ContainerStarted","Data":"2d302411d9f49d87316035c5443f0d37515708802bde3bf014c1941d50514b0d"} Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.382677 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d" exitCode=0 Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.382714 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d"} Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.382737 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"} Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.382756 4957 scope.go:117] "RemoveContainer" containerID="aa7dcf960732566934369f18786490e508e6fd20d84c21ca9c77aae13bfcc8d4" Nov 28 21:14:09 crc kubenswrapper[4957]: I1128 21:14:09.397999 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.397980216 podStartE2EDuration="2.397980216s" podCreationTimestamp="2025-11-28 21:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:14:09.395438854 +0000 UTC m=+1488.864086763" watchObservedRunningTime="2025-11-28 21:14:09.397980216 +0000 UTC m=+1488.866628125" Nov 28 21:14:10 crc kubenswrapper[4957]: I1128 21:14:10.715786 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:14:10 crc kubenswrapper[4957]: I1128 21:14:10.716124 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 21:14:12 crc kubenswrapper[4957]: I1128 21:14:12.794480 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 21:14:12 crc kubenswrapper[4957]: I1128 21:14:12.977183 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 21:14:12 crc kubenswrapper[4957]: I1128 21:14:12.977590 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 21:14:13 crc kubenswrapper[4957]: I1128 21:14:13.992368 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fd99b25-e3b2-439d-874c-6ae3351f9cea" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.250:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 21:14:13 crc kubenswrapper[4957]: I1128 21:14:13.992381 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fd99b25-e3b2-439d-874c-6ae3351f9cea" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.250:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.395546 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.398159 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.408125 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.479438 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvz84\" (UniqueName: \"kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.479623 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.479680 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.582185 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.582405 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvz84\" (UniqueName: \"kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.582564 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.583260 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.583891 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.624892 4957 operation_generator.go:637] 
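
The utilities and catalog-content volumes being mounted above for community-operators-kz28d are plain emptyDir scratch volumes that the marketplace catalog image unpacks its content into. A sketch of the equivalent spec fragment using the k8s.io/api types (names copied from the log; the surrounding pod spec is omitted, and a go.mod pulling in k8s.io/api is assumed):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // emptyDir volumes equivalent to the ones mounted above.
        volumes := []corev1.Volume{
            {Name: "utilities", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            {Name: "catalog-content", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{}}},
        }
        for _, v := range volumes {
            fmt.Printf("volume %s: emptyDir=%v\n", v.Name, v.EmptyDir != nil)
        }
    }
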
"MountVolume.SetUp succeeded for volume \"kube-api-access-nvz84\" (UniqueName: \"kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84\") pod \"community-operators-kz28d\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:14 crc kubenswrapper[4957]: I1128 21:14:14.744889 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:15 crc kubenswrapper[4957]: I1128 21:14:15.270076 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:15 crc kubenswrapper[4957]: W1128 21:14:15.279121 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506a7f7c_fd88_4108_b965_f7c7000401fd.slice/crio-750620a5980045dae8da8286f425694302df982d6ff88bc75c111ddd3f9e74c8 WatchSource:0}: Error finding container 750620a5980045dae8da8286f425694302df982d6ff88bc75c111ddd3f9e74c8: Status 404 returned error can't find the container with id 750620a5980045dae8da8286f425694302df982d6ff88bc75c111ddd3f9e74c8 Nov 28 21:14:15 crc kubenswrapper[4957]: I1128 21:14:15.451168 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerStarted","Data":"750620a5980045dae8da8286f425694302df982d6ff88bc75c111ddd3f9e74c8"} Nov 28 21:14:15 crc kubenswrapper[4957]: I1128 21:14:15.715433 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 21:14:15 crc kubenswrapper[4957]: I1128 21:14:15.717424 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 21:14:16 crc kubenswrapper[4957]: I1128 21:14:16.462401 4957 generic.go:334] "Generic (PLEG): container finished" podID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerID="b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4" exitCode=0 Nov 28 21:14:16 crc kubenswrapper[4957]: I1128 21:14:16.463046 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerDied","Data":"b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4"} Nov 28 21:14:16 crc kubenswrapper[4957]: I1128 21:14:16.726432 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="27892901-c588-481e-8b3c-363e2128f7d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 21:14:16 crc kubenswrapper[4957]: I1128 21:14:16.726994 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="27892901-c588-481e-8b3c-363e2128f7d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.251:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 21:14:17 crc kubenswrapper[4957]: I1128 21:14:17.794757 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 21:14:17 crc kubenswrapper[4957]: I1128 21:14:17.842419 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 
21:14:18 crc kubenswrapper[4957]: I1128 21:14:18.489695 4957 generic.go:334] "Generic (PLEG): container finished" podID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerID="3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024" exitCode=0 Nov 28 21:14:18 crc kubenswrapper[4957]: I1128 21:14:18.489906 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerDied","Data":"3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024"} Nov 28 21:14:18 crc kubenswrapper[4957]: I1128 21:14:18.528546 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 21:14:19 crc kubenswrapper[4957]: I1128 21:14:19.503506 4957 generic.go:334] "Generic (PLEG): container finished" podID="553b04d2-b353-4a99-9c06-970275003669" containerID="2377c26828e7f46904cfb1bca4f5a5c893889560498fb8ebee1f78fc18b190cc" exitCode=137 Nov 28 21:14:19 crc kubenswrapper[4957]: I1128 21:14:19.503876 4957 generic.go:334] "Generic (PLEG): container finished" podID="553b04d2-b353-4a99-9c06-970275003669" containerID="eba61aab852bc348d91e75d244ca07d10746cf59bd178a9feb4a72d44ec5c86a" exitCode=137 Nov 28 21:14:19 crc kubenswrapper[4957]: I1128 21:14:19.503576 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerDied","Data":"2377c26828e7f46904cfb1bca4f5a5c893889560498fb8ebee1f78fc18b190cc"} Nov 28 21:14:19 crc kubenswrapper[4957]: I1128 21:14:19.503920 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerDied","Data":"eba61aab852bc348d91e75d244ca07d10746cf59bd178a9feb4a72d44ec5c86a"} Nov 28 21:14:19 crc kubenswrapper[4957]: I1128 21:14:19.955794 4957 util.go:48] "No ready sandbox for pod can be found. 
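The two "Probe failed" records above are startup-probe timeouts: the kubelet's HTTPS GET against 10.217.0.251:8775 received no response headers before the probe timeout elapsed, so Go's HTTP client cancelled the request. A minimal client-go sketch of what such a probe definition could look like follows; the timeout, period, and threshold values are assumptions for illustration, not values read from this cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Hypothetical startup probe shaped like the one failing for nova-metadata-0:
	// an HTTPS GET on port 8775. If the endpoint does not answer within
	// TimeoutSeconds, the probe fails with exactly the "net/http: request
	// canceled (Client.Timeout exceeded while awaiting headers)" output above.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   5,  // assumed; not visible in the log
		PeriodSeconds:    10, // assumed
		FailureThreshold: 30, // assumed
	}
	fmt.Printf("startup probe: %+v\n", probe)
}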
Need to start a new one" pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.149592 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data\") pod \"553b04d2-b353-4a99-9c06-970275003669\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.149675 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts\") pod \"553b04d2-b353-4a99-9c06-970275003669\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.149926 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c9bz\" (UniqueName: \"kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz\") pod \"553b04d2-b353-4a99-9c06-970275003669\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.149962 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle\") pod \"553b04d2-b353-4a99-9c06-970275003669\" (UID: \"553b04d2-b353-4a99-9c06-970275003669\") " Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.158340 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts" (OuterVolumeSpecName: "scripts") pod "553b04d2-b353-4a99-9c06-970275003669" (UID: "553b04d2-b353-4a99-9c06-970275003669"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.168926 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz" (OuterVolumeSpecName: "kube-api-access-9c9bz") pod "553b04d2-b353-4a99-9c06-970275003669" (UID: "553b04d2-b353-4a99-9c06-970275003669"). InnerVolumeSpecName "kube-api-access-9c9bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.252500 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.252536 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c9bz\" (UniqueName: \"kubernetes.io/projected/553b04d2-b353-4a99-9c06-970275003669-kube-api-access-9c9bz\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.297620 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data" (OuterVolumeSpecName: "config-data") pod "553b04d2-b353-4a99-9c06-970275003669" (UID: "553b04d2-b353-4a99-9c06-970275003669"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.344620 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "553b04d2-b353-4a99-9c06-970275003669" (UID: "553b04d2-b353-4a99-9c06-970275003669"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.354117 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.354149 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553b04d2-b353-4a99-9c06-970275003669-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.518582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerStarted","Data":"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49"} Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.523250 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"553b04d2-b353-4a99-9c06-970275003669","Type":"ContainerDied","Data":"ad659547fbc9e9b023a33f5fe045df9a19e1b4cf859b53a8e970f9c9351ea537"} Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.523689 4957 scope.go:117] "RemoveContainer" containerID="2377c26828e7f46904cfb1bca4f5a5c893889560498fb8ebee1f78fc18b190cc" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.523863 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.545381 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kz28d" podStartSLOduration=3.913978526 podStartE2EDuration="6.545362532s" podCreationTimestamp="2025-11-28 21:14:14 +0000 UTC" firstStartedPulling="2025-11-28 21:14:16.464980084 +0000 UTC m=+1495.933627993" lastFinishedPulling="2025-11-28 21:14:19.09636409 +0000 UTC m=+1498.565011999" observedRunningTime="2025-11-28 21:14:20.534572756 +0000 UTC m=+1500.003220665" watchObservedRunningTime="2025-11-28 21:14:20.545362532 +0000 UTC m=+1500.014010441" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.570961 4957 scope.go:117] "RemoveContainer" containerID="eba61aab852bc348d91e75d244ca07d10746cf59bd178a9feb4a72d44ec5c86a" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.573126 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.598422 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.602347 4957 scope.go:117] "RemoveContainer" containerID="89ff5575678594b11d39d0d5027b0c483aca85471fa71eb82a87780eee67ad06" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.631457 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 21:14:20 crc kubenswrapper[4957]: E1128 21:14:20.632304 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-listener" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.632382 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-listener" Nov 28 21:14:20 crc kubenswrapper[4957]: E1128 21:14:20.632456 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-evaluator" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.632510 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-evaluator" Nov 28 21:14:20 crc kubenswrapper[4957]: E1128 21:14:20.632577 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-api" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.632624 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-api" Nov 28 21:14:20 crc kubenswrapper[4957]: E1128 21:14:20.632696 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-notifier" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.632746 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-notifier" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.633018 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-evaluator" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.633098 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-api" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.633167 4957 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-listener" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.633301 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="553b04d2-b353-4a99-9c06-970275003669" containerName="aodh-notifier" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.635654 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.637519 4957 scope.go:117] "RemoveContainer" containerID="b4402c9eee37ad3fda5f7b5254609f0e6fd906d86cc2810b480c30b45880179a" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.642682 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.642687 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-72swb" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.642973 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.643665 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.643847 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.646252 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.769058 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.769101 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.769154 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.769178 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.769397 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mkf\" (UniqueName: \"kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.770685 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.827573 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553b04d2-b353-4a99-9c06-970275003669" path="/var/lib/kubelet/pods/553b04d2-b353-4a99-9c06-970275003669/volumes" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.873985 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.874255 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.874281 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.874389 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.874422 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.874496 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59mkf\" (UniqueName: \"kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.876540 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.876725 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.877517 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.878245 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.880985 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " 
pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.887861 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.888047 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.888921 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.889378 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.892086 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59mkf\" (UniqueName: \"kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf\") pod \"aodh-0\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " pod="openstack/aodh-0" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.970585 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-72swb" Nov 28 21:14:20 crc kubenswrapper[4957]: I1128 21:14:20.978628 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 21:14:21 crc kubenswrapper[4957]: I1128 21:14:21.493076 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 21:14:21 crc kubenswrapper[4957]: I1128 21:14:21.535713 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerStarted","Data":"09b7d3d21bf5d30c7598e90bcce18b52c53480ee6726e127b037eca88770257b"} Nov 28 21:14:22 crc kubenswrapper[4957]: I1128 21:14:22.548353 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerStarted","Data":"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"} Nov 28 21:14:22 crc kubenswrapper[4957]: I1128 21:14:22.987516 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 21:14:22 crc kubenswrapper[4957]: I1128 21:14:22.987984 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 21:14:22 crc kubenswrapper[4957]: I1128 21:14:22.988134 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 21:14:23 crc kubenswrapper[4957]: I1128 21:14:23.001557 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 21:14:23 crc kubenswrapper[4957]: I1128 21:14:23.575768 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerStarted","Data":"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"} Nov 28 21:14:23 crc kubenswrapper[4957]: I1128 21:14:23.576188 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 21:14:23 crc kubenswrapper[4957]: I1128 21:14:23.586091 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.590972 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerStarted","Data":"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47"} Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.591409 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerStarted","Data":"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471"} Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.625042 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.888170932 podStartE2EDuration="4.625021561s" podCreationTimestamp="2025-11-28 21:14:20 +0000 UTC" firstStartedPulling="2025-11-28 21:14:21.497627518 +0000 UTC m=+1500.966275427" lastFinishedPulling="2025-11-28 21:14:24.234478107 +0000 UTC m=+1503.703126056" observedRunningTime="2025-11-28 21:14:24.614091342 +0000 UTC m=+1504.082739261" watchObservedRunningTime="2025-11-28 21:14:24.625021561 +0000 UTC m=+1504.093669470" Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.745131 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.745186 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:24 crc kubenswrapper[4957]: I1128 21:14:24.846575 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:25 crc kubenswrapper[4957]: I1128 21:14:25.657992 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:25 crc kubenswrapper[4957]: I1128 21:14:25.716102 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:25 crc kubenswrapper[4957]: I1128 21:14:25.719814 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 21:14:25 crc kubenswrapper[4957]: I1128 21:14:25.721424 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 21:14:25 crc kubenswrapper[4957]: I1128 21:14:25.724307 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 21:14:26 crc kubenswrapper[4957]: I1128 21:14:26.617689 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 21:14:27 crc kubenswrapper[4957]: I1128 21:14:27.632838 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kz28d" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="registry-server" containerID="cri-o://8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49" gracePeriod=2 Nov 28 21:14:27 crc kubenswrapper[4957]: I1128 21:14:27.900744 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.192697 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.276686 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities\") pod \"506a7f7c-fd88-4108-b965-f7c7000401fd\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.276980 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content\") pod \"506a7f7c-fd88-4108-b965-f7c7000401fd\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.280481 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvz84\" (UniqueName: \"kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84\") pod \"506a7f7c-fd88-4108-b965-f7c7000401fd\" (UID: \"506a7f7c-fd88-4108-b965-f7c7000401fd\") " Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.277564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities" (OuterVolumeSpecName: "utilities") pod "506a7f7c-fd88-4108-b965-f7c7000401fd" (UID: "506a7f7c-fd88-4108-b965-f7c7000401fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.281256 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.287053 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84" (OuterVolumeSpecName: "kube-api-access-nvz84") pod "506a7f7c-fd88-4108-b965-f7c7000401fd" (UID: "506a7f7c-fd88-4108-b965-f7c7000401fd"). InnerVolumeSpecName "kube-api-access-nvz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.330381 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "506a7f7c-fd88-4108-b965-f7c7000401fd" (UID: "506a7f7c-fd88-4108-b965-f7c7000401fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.383286 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/506a7f7c-fd88-4108-b965-f7c7000401fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.383679 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvz84\" (UniqueName: \"kubernetes.io/projected/506a7f7c-fd88-4108-b965-f7c7000401fd-kube-api-access-nvz84\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.645231 4957 generic.go:334] "Generic (PLEG): container finished" podID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerID="8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49" exitCode=0 Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.645315 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kz28d" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.645336 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerDied","Data":"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49"} Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.646586 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz28d" event={"ID":"506a7f7c-fd88-4108-b965-f7c7000401fd","Type":"ContainerDied","Data":"750620a5980045dae8da8286f425694302df982d6ff88bc75c111ddd3f9e74c8"} Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.646622 4957 scope.go:117] "RemoveContainer" containerID="8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.684002 4957 scope.go:117] "RemoveContainer" containerID="3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.713854 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.731521 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kz28d"] Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.738886 4957 scope.go:117] "RemoveContainer" containerID="b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.767086 4957 scope.go:117] "RemoveContainer" containerID="8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49" Nov 28 21:14:28 crc kubenswrapper[4957]: E1128 21:14:28.767523 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49\": container with ID starting with 8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49 not found: ID does not exist" containerID="8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.767577 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49"} err="failed to get container status \"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49\": rpc error: code = NotFound desc = could not find container \"8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49\": container with ID starting with 8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49 not found: ID does not exist" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.767611 4957 scope.go:117] "RemoveContainer" containerID="3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024" Nov 28 21:14:28 crc kubenswrapper[4957]: E1128 21:14:28.768022 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024\": container with ID starting with 3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024 not found: ID does not exist" containerID="3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.768051 4957 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024"} err="failed to get container status \"3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024\": rpc error: code = NotFound desc = could not find container \"3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024\": container with ID starting with 3ce7d11e8e5ba6ac80013dab5be8239576604412b9ed17f131eda41e42f75024 not found: ID does not exist" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.768072 4957 scope.go:117] "RemoveContainer" containerID="b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4" Nov 28 21:14:28 crc kubenswrapper[4957]: E1128 21:14:28.768379 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4\": container with ID starting with b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4 not found: ID does not exist" containerID="b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.768399 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4"} err="failed to get container status \"b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4\": rpc error: code = NotFound desc = could not find container \"b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4\": container with ID starting with b6dd22749485405240c6c2b85c2bf0821440894799e76c03a91512481a8808b4 not found: ID does not exist" Nov 28 21:14:28 crc kubenswrapper[4957]: I1128 21:14:28.827055 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" path="/var/lib/kubelet/pods/506a7f7c-fd88-4108-b965-f7c7000401fd/volumes" Nov 28 21:14:32 crc kubenswrapper[4957]: I1128 21:14:32.947422 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:32 crc kubenswrapper[4957]: I1128 21:14:32.951600 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" containerName="kube-state-metrics" containerID="cri-o://7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e" gracePeriod=30 Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.057742 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.058186 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="763de9cf-5d74-4977-b6d5-53430185b17b" containerName="mysqld-exporter" containerID="cri-o://6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a" gracePeriod=30 Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.640065 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.649597 4957 util.go:48] "No ready sandbox for pod can be found. 
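The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are benign: by the time the kubelet retries the delete, CRI-O has already removed the container, so the runtime answers NotFound. A sketch of the same CRI call with NotFound treated as "already gone"; the socket path is the usual CRI-O default but is an assumption here, and the container ID is reused from the log purely as a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path; requires privileges to reach the socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	_, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "8d1e375e0bee271c8c24dedf1777f589afe33fef0ebf35600bf919eca53d5e49",
	})
	if status.Code(err) == codes.NotFound {
		// Same condition the kubelet logs above: the container is already
		// gone, so there is nothing left to delete.
		fmt.Println("container already removed; treating as success")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("container still present")
}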
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.707285 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data\") pod \"763de9cf-5d74-4977-b6d5-53430185b17b\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.707427 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle\") pod \"763de9cf-5d74-4977-b6d5-53430185b17b\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.707465 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtp9j\" (UniqueName: \"kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j\") pod \"763de9cf-5d74-4977-b6d5-53430185b17b\" (UID: \"763de9cf-5d74-4977-b6d5-53430185b17b\") " Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.707520 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btq74\" (UniqueName: \"kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74\") pod \"12d0ff1a-6220-432a-bc8a-f611c2e6996d\" (UID: \"12d0ff1a-6220-432a-bc8a-f611c2e6996d\") " Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.708111 4957 generic.go:334] "Generic (PLEG): container finished" podID="763de9cf-5d74-4977-b6d5-53430185b17b" containerID="6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a" exitCode=2 Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.708146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"763de9cf-5d74-4977-b6d5-53430185b17b","Type":"ContainerDied","Data":"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a"} Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.708190 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"763de9cf-5d74-4977-b6d5-53430185b17b","Type":"ContainerDied","Data":"9e89f316ece9500e28ccd89e1be23fc27b76246e889e5a8a9011b767362492a3"} Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.708197 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.708222 4957 scope.go:117] "RemoveContainer" containerID="6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.712724 4957 generic.go:334] "Generic (PLEG): container finished" podID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" containerID="7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e" exitCode=2 Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.712761 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12d0ff1a-6220-432a-bc8a-f611c2e6996d","Type":"ContainerDied","Data":"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e"} Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.712786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12d0ff1a-6220-432a-bc8a-f611c2e6996d","Type":"ContainerDied","Data":"4e8815ddd5db6aa9aa4e8207485727867f23077b06d284f70d2f2dda2ac55399"} Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.712832 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.713157 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j" (OuterVolumeSpecName: "kube-api-access-qtp9j") pod "763de9cf-5d74-4977-b6d5-53430185b17b" (UID: "763de9cf-5d74-4977-b6d5-53430185b17b"). InnerVolumeSpecName "kube-api-access-qtp9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.717779 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74" (OuterVolumeSpecName: "kube-api-access-btq74") pod "12d0ff1a-6220-432a-bc8a-f611c2e6996d" (UID: "12d0ff1a-6220-432a-bc8a-f611c2e6996d"). InnerVolumeSpecName "kube-api-access-btq74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.734267 4957 scope.go:117] "RemoveContainer" containerID="6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a" Nov 28 21:14:33 crc kubenswrapper[4957]: E1128 21:14:33.734715 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a\": container with ID starting with 6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a not found: ID does not exist" containerID="6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.734760 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a"} err="failed to get container status \"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a\": rpc error: code = NotFound desc = could not find container \"6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a\": container with ID starting with 6aaaadef1162121f6778cbbcbfe557586d6e15cbee7c79a82221c9d8c68a360a not found: ID does not exist" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.734788 4957 scope.go:117] "RemoveContainer" containerID="7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.752050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "763de9cf-5d74-4977-b6d5-53430185b17b" (UID: "763de9cf-5d74-4977-b6d5-53430185b17b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.760366 4957 scope.go:117] "RemoveContainer" containerID="7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e" Nov 28 21:14:33 crc kubenswrapper[4957]: E1128 21:14:33.760900 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e\": container with ID starting with 7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e not found: ID does not exist" containerID="7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.760943 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e"} err="failed to get container status \"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e\": rpc error: code = NotFound desc = could not find container \"7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e\": container with ID starting with 7ee4b9ddefeb9ad5f63b21913cdad39bdc4e0651062c6ec3dc3ba9c17354b90e not found: ID does not exist" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.777615 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data" (OuterVolumeSpecName: "config-data") pod "763de9cf-5d74-4977-b6d5-53430185b17b" (UID: "763de9cf-5d74-4977-b6d5-53430185b17b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.810963 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtp9j\" (UniqueName: \"kubernetes.io/projected/763de9cf-5d74-4977-b6d5-53430185b17b-kube-api-access-qtp9j\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.811000 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btq74\" (UniqueName: \"kubernetes.io/projected/12d0ff1a-6220-432a-bc8a-f611c2e6996d-kube-api-access-btq74\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.811010 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:33 crc kubenswrapper[4957]: I1128 21:14:33.811019 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/763de9cf-5d74-4977-b6d5-53430185b17b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.041921 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.061512 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.073842 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.089522 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100005 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: E1128 21:14:34.100766 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="extract-utilities" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100793 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="extract-utilities" Nov 28 21:14:34 crc kubenswrapper[4957]: E1128 21:14:34.100840 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="extract-content" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100850 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="extract-content" Nov 28 21:14:34 crc kubenswrapper[4957]: E1128 21:14:34.100883 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" containerName="kube-state-metrics" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100893 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" containerName="kube-state-metrics" Nov 28 21:14:34 crc kubenswrapper[4957]: E1128 21:14:34.100908 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="registry-server" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100915 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="registry-server" Nov 28 21:14:34 crc 
kubenswrapper[4957]: E1128 21:14:34.100937 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763de9cf-5d74-4977-b6d5-53430185b17b" containerName="mysqld-exporter" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.100950 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="763de9cf-5d74-4977-b6d5-53430185b17b" containerName="mysqld-exporter" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.101334 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="763de9cf-5d74-4977-b6d5-53430185b17b" containerName="mysqld-exporter" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.101364 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" containerName="kube-state-metrics" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.101387 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="506a7f7c-fd88-4108-b965-f7c7000401fd" containerName="registry-server" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.102446 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.104544 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.105217 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.111452 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.113009 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.115795 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.116849 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.116894 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.117059 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-config-data\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.117146 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnb78\" (UniqueName: \"kubernetes.io/projected/448e0773-f22b-417a-a4b4-3434881c628f-kube-api-access-hnb78\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " 
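Each "SyncLoop DELETE" above pairs with a "Killing container with a grace period" entry, and the 30-second value is the grace period carried on the API delete. A client-go sketch issuing the same kind of deletion; the namespace and pod name mirror the log, while the kubeconfig location is an assumption.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Delete with the same 30s grace period the kubelet reports; the kubelet
	// then logs "Killing container with a grace period" and, once the sandbox
	// is gone, "No ready sandbox for pod can be found".
	grace := int64(30)
	err = cs.CoreV1().Pods("openstack").Delete(context.Background(),
		"kube-state-metrics-0", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}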
pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.119501 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.124549 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.133787 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.218925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.218980 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd466\" (UniqueName: \"kubernetes.io/projected/7d861b88-8080-411b-8c34-ae277a73b580-kube-api-access-fd466\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219113 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-config-data\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219186 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnb78\" (UniqueName: \"kubernetes.io/projected/448e0773-f22b-417a-a4b4-3434881c628f-kube-api-access-hnb78\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219292 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219322 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.219369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.223503 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.223533 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-config-data\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.231035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/448e0773-f22b-417a-a4b4-3434881c628f-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.239977 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnb78\" (UniqueName: \"kubernetes.io/projected/448e0773-f22b-417a-a4b4-3434881c628f-kube-api-access-hnb78\") pod \"mysqld-exporter-0\" (UID: \"448e0773-f22b-417a-a4b4-3434881c628f\") " pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.321285 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.321346 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.321425 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.321449 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd466\" (UniqueName: \"kubernetes.io/projected/7d861b88-8080-411b-8c34-ae277a73b580-kube-api-access-fd466\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.324724 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc 
kubenswrapper[4957]: I1128 21:14:34.324964 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.326087 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d861b88-8080-411b-8c34-ae277a73b580-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.339464 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd466\" (UniqueName: \"kubernetes.io/projected/7d861b88-8080-411b-8c34-ae277a73b580-kube-api-access-fd466\") pod \"kube-state-metrics-0\" (UID: \"7d861b88-8080-411b-8c34-ae277a73b580\") " pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.477690 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.489153 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.830835 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d0ff1a-6220-432a-bc8a-f611c2e6996d" path="/var/lib/kubelet/pods/12d0ff1a-6220-432a-bc8a-f611c2e6996d/volumes" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.833522 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763de9cf-5d74-4977-b6d5-53430185b17b" path="/var/lib/kubelet/pods/763de9cf-5d74-4977-b6d5-53430185b17b/volumes" Nov 28 21:14:34 crc kubenswrapper[4957]: I1128 21:14:34.998271 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 21:14:35 crc kubenswrapper[4957]: W1128 21:14:35.005621 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod448e0773_f22b_417a_a4b4_3434881c628f.slice/crio-a499115f8d7b4fa20f27e7031c06aa9af726d011a342db372ee83de20aa4f405 WatchSource:0}: Error finding container a499115f8d7b4fa20f27e7031c06aa9af726d011a342db372ee83de20aa4f405: Status 404 returned error can't find the container with id a499115f8d7b4fa20f27e7031c06aa9af726d011a342db372ee83de20aa4f405 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.014683 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.183869 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.184133 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-central-agent" containerID="cri-o://ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f" gracePeriod=30 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.184191 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" 
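The entries above show the volume reconciler's usual three-step sequence for the two new pods (VerifyControllerAttachedVolume started, MountVolume started, MountVolume.SetUp succeeded), followed by the first sync's "No sandbox" messages and a benign cAdvisor 404: the crio- cgroup appeared and vanished before the watch handler could resolve it. A minimal sketch for pulling one pod's volume lifecycle out of a journal like this one, assuming the journal text is piped in on stdin (for example from journalctl -u kubelet):

    // Sketch: filter kubelet journal lines for one pod's volume lifecycle.
    // The pod name and the three reconciler markers come straight from the
    // entries above; everything else here is illustrative.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	pod := `pod="openstack/kube-state-metrics-0"` // pod of interest
    	markers := []string{
    		"VerifyControllerAttachedVolume started",
    		"MountVolume started",
    		"MountVolume.SetUp succeeded",
    	}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		line := sc.Text()
    		if !strings.Contains(line, pod) {
    			continue
    		}
    		for _, m := range markers {
    			if strings.Contains(line, m) {
    				fmt.Println(line)
    				break
    			}
    		}
    	}
    }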
containerName="proxy-httpd" containerID="cri-o://dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129" gracePeriod=30 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.184226 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="sg-core" containerID="cri-o://4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303" gracePeriod=30 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.184252 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-notification-agent" containerID="cri-o://c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166" gracePeriod=30 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.749055 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d861b88-8080-411b-8c34-ae277a73b580","Type":"ContainerStarted","Data":"fe4401c6c1276941f463609d355e8501649eae1fd6cb8f2f508d1160e9358e14"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.749606 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d861b88-8080-411b-8c34-ae277a73b580","Type":"ContainerStarted","Data":"348c9e2088ae24a54c6a6d3a3787325c160972ddbef102c5584a059f6e31c615"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.749625 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.750755 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"448e0773-f22b-417a-a4b4-3434881c628f","Type":"ContainerStarted","Data":"a499115f8d7b4fa20f27e7031c06aa9af726d011a342db372ee83de20aa4f405"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754342 4957 generic.go:334] "Generic (PLEG): container finished" podID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerID="dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129" exitCode=0 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754369 4957 generic.go:334] "Generic (PLEG): container finished" podID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerID="4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303" exitCode=2 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754377 4957 generic.go:334] "Generic (PLEG): container finished" podID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerID="ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f" exitCode=0 Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754412 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerDied","Data":"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754478 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerDied","Data":"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.754497 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerDied","Data":"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"} Nov 28 21:14:35 crc kubenswrapper[4957]: I1128 21:14:35.779707 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.344082723 podStartE2EDuration="1.779686155s" podCreationTimestamp="2025-11-28 21:14:34 +0000 UTC" firstStartedPulling="2025-11-28 21:14:35.002264738 +0000 UTC m=+1514.470912647" lastFinishedPulling="2025-11-28 21:14:35.43786817 +0000 UTC m=+1514.906516079" observedRunningTime="2025-11-28 21:14:35.764635065 +0000 UTC m=+1515.233282974" watchObservedRunningTime="2025-11-28 21:14:35.779686155 +0000 UTC m=+1515.248334064" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.582416 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685193 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685396 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685452 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbxl\" (UniqueName: \"kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685479 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685504 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685524 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.685610 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts\") pod \"69d6aae3-d098-4f75-8335-f86d900a41ce\" (UID: \"69d6aae3-d098-4f75-8335-f86d900a41ce\") " Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.692736 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.692746 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.693656 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl" (OuterVolumeSpecName: "kube-api-access-6vbxl") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "kube-api-access-6vbxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.696433 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts" (OuterVolumeSpecName: "scripts") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.739145 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.767687 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"448e0773-f22b-417a-a4b4-3434881c628f","Type":"ContainerStarted","Data":"a19dbe7efcda961a45dd71763d7289a564328e5548161eff38cb6e5f77a173a2"} Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.776685 4957 generic.go:334] "Generic (PLEG): container finished" podID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerID="c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166" exitCode=0 Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.776773 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.776785 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerDied","Data":"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"} Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.776822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d6aae3-d098-4f75-8335-f86d900a41ce","Type":"ContainerDied","Data":"28c2d16436e31478fad9e2713452f70f20001dfb838e667fa267fd7544cbc9d7"} Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.776842 4957 scope.go:117] "RemoveContainer" containerID="dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.792580 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.793665 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.062721544 podStartE2EDuration="2.793641898s" podCreationTimestamp="2025-11-28 21:14:34 +0000 UTC" firstStartedPulling="2025-11-28 21:14:35.008818359 +0000 UTC m=+1514.477466268" lastFinishedPulling="2025-11-28 21:14:35.739738713 +0000 UTC m=+1515.208386622" observedRunningTime="2025-11-28 21:14:36.780164227 +0000 UTC m=+1516.248812146" watchObservedRunningTime="2025-11-28 21:14:36.793641898 +0000 UTC m=+1516.262289817" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798518 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798555 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798570 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798583 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbxl\" (UniqueName: \"kubernetes.io/projected/69d6aae3-d098-4f75-8335-f86d900a41ce-kube-api-access-6vbxl\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798596 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.798608 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d6aae3-d098-4f75-8335-f86d900a41ce-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.823463 4957 
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.823463 4957 scope.go:117] "RemoveContainer" containerID="4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.826794 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data" (OuterVolumeSpecName: "config-data") pod "69d6aae3-d098-4f75-8335-f86d900a41ce" (UID: "69d6aae3-d098-4f75-8335-f86d900a41ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.850142 4957 scope.go:117] "RemoveContainer" containerID="c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.881731 4957 scope.go:117] "RemoveContainer" containerID="ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.901914 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d6aae3-d098-4f75-8335-f86d900a41ce-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.919161 4957 scope.go:117] "RemoveContainer" containerID="dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129"
Nov 28 21:14:36 crc kubenswrapper[4957]: E1128 21:14:36.922364 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129\": container with ID starting with dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129 not found: ID does not exist" containerID="dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.922418 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129"} err="failed to get container status \"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129\": rpc error: code = NotFound desc = could not find container \"dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129\": container with ID starting with dba352ea6b78530bf43434866d26ae02b6380def296b8d9992a6dc13568c3129 not found: ID does not exist"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.922450 4957 scope.go:117] "RemoveContainer" containerID="4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"
Nov 28 21:14:36 crc kubenswrapper[4957]: E1128 21:14:36.926316 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303\": container with ID starting with 4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303 not found: ID does not exist" containerID="4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.926351 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303"} err="failed to get container status \"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303\": rpc error: code = NotFound desc = could not find container \"4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303\": container with ID starting with 4ee8f09305637c46c90e0fae470c1ec38ffeb917e53948ae940aee55a0031303 not found: ID does not exist"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.926372 4957 scope.go:117] "RemoveContainer" containerID="c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"
Nov 28 21:14:36 crc kubenswrapper[4957]: E1128 21:14:36.930381 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166\": container with ID starting with c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166 not found: ID does not exist" containerID="c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.930415 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166"} err="failed to get container status \"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166\": rpc error: code = NotFound desc = could not find container \"c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166\": container with ID starting with c92827a55922891f985c895164871a5e0731d5365fc30b936b1f82a461ed4166 not found: ID does not exist"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.930434 4957 scope.go:117] "RemoveContainer" containerID="ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"
Nov 28 21:14:36 crc kubenswrapper[4957]: E1128 21:14:36.934336 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f\": container with ID starting with ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f not found: ID does not exist" containerID="ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"
Nov 28 21:14:36 crc kubenswrapper[4957]: I1128 21:14:36.934379 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f"} err="failed to get container status \"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f\": rpc error: code = NotFound desc = could not find container \"ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f\": container with ID starting with ef7596cead048b0ad51f2faa09e15a426ca9ea0b17a148ef2e33e6169e0ca46f not found: ID does not exist"
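The four NotFound error pairs above are a benign race, not a failure: each container had already been removed by the time its status was re-queried, so "already gone" is the desired end state. A generic deleter can simply treat a CRI NotFound as success; a minimal illustrative sketch of that pattern (not kubelet code), using gRPC status codes:

    // Sketch: swallow codes.NotFound when removing a container, since the
    // target state ("container gone") is already reached. Stubbed CRI call.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer wraps a CRI remove call and treats NotFound as success.
    func removeContainer(remove func(id string) error, id string) error {
    	err := remove(id)
    	if status.Code(err) == codes.NotFound {
    		return nil // container already gone: nothing left to do
    	}
    	return err
    }

    func main() {
    	stub := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	if err := removeContainer(stub, "dba352ea6b78"); err != nil {
    		panic(err)
    	}
    	fmt.Println("treated NotFound as success")
    }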
podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="proxy-httpd" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.178858 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="proxy-httpd" Nov 28 21:14:37 crc kubenswrapper[4957]: E1128 21:14:37.178865 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-notification-agent" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.178871 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-notification-agent" Nov 28 21:14:37 crc kubenswrapper[4957]: E1128 21:14:37.178894 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="sg-core" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.178900 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="sg-core" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.179135 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="proxy-httpd" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.179149 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-notification-agent" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.179156 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="ceilometer-central-agent" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.179168 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" containerName="sg-core" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.181094 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.186517 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.186644 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.196946 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.199408 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310235 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-config-data\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310307 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4kr\" (UniqueName: \"kubernetes.io/projected/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-kube-api-access-zk4kr\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310354 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-run-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.310990 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-scripts\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.311046 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-log-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.311356 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413409 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-config-data\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413494 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4kr\" (UniqueName: \"kubernetes.io/projected/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-kube-api-access-zk4kr\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413526 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-run-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413592 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-scripts\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413676 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-log-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.413736 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.415331 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-log-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.415360 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-run-httpd\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.417575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.417874 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-config-data\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.419477 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.420081 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-scripts\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.429455 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.433368 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4kr\" (UniqueName: \"kubernetes.io/projected/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-kube-api-access-zk4kr\") pod \"ceilometer-0\" (UID: \"bd04e3e7-6fa3-48e9-838f-56571dd69d8d\") " pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.497149 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:37 crc kubenswrapper[4957]: I1128 21:14:37.969403 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:38 crc kubenswrapper[4957]: I1128 21:14:38.802188 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerStarted","Data":"3e89699f27ee0b3477d65826d8002b9a9f4066289e6268e29c482f8f8cc42e14"} Nov 28 21:14:38 crc kubenswrapper[4957]: I1128 21:14:38.802494 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerStarted","Data":"0bf64af060228b3a63f2a18a58b4fa203986ab1eae7ff7d98595d24d7341a715"} Nov 28 21:14:38 crc kubenswrapper[4957]: I1128 21:14:38.827453 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d6aae3-d098-4f75-8335-f86d900a41ce" path="/var/lib/kubelet/pods/69d6aae3-d098-4f75-8335-f86d900a41ce/volumes" Nov 28 21:14:39 crc kubenswrapper[4957]: I1128 21:14:39.815733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerStarted","Data":"41440d1007dfa4cb0447b904fcd916b9db4df338a1935b03185a0fbbe8632b22"} Nov 28 21:14:40 crc kubenswrapper[4957]: I1128 21:14:40.831063 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerStarted","Data":"eb158670947b4051a92e670ee0d0a0f7a34ced85fcf6af4deb0d269449078b24"} Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.544909 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-7skhk"] Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.555596 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-7skhk"] Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.639050 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-j52q4"] Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.640549 4957 util.go:30] "No sandbox for pod can be found. 
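Note the heat-db-sync churn above: the pod heat-db-sync-7skhk is deleted and removed, and a new pod heat-db-sync-j52q4 (same prefix, new random suffix) is added, which is the pattern of a db-sync job pod being re-created rather than the same pod restarting. A minimal sketch that summarizes this kind of churn from a journal on stdin, matching only the API SyncLoop entries (the "(PLEG)" and "(probe)" variants deliberately fall through):

    // Sketch: print verb + pod for each "SyncLoop <VERB>" entry, making
    // replacements like 7skhk -> j52q4 easy to spot. Illustrative tooling,
    // matched against the klog text format used by the entries above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    var syncLoop = regexp.MustCompile(`"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		if m := syncLoop.FindStringSubmatch(sc.Text()); m != nil {
    			fmt.Printf("%-6s %s\n", m[1], m[2])
    		}
    	}
    }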
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.640549 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.657072 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-j52q4"]
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.743045 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.743127 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvpj\" (UniqueName: \"kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.743272 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.835310 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcfaba7-030f-4415-b1c0-79820941039b" path="/var/lib/kubelet/pods/8bcfaba7-030f-4415-b1c0-79820941039b/volumes"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.845152 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvpj\" (UniqueName: \"kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.845353 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.845441 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.850849 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.850951 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.852103 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerStarted","Data":"1bcd5097c6f47fe994ecb88bb6352a7e7d6b5d17b63edf53c4fa36608c482217"}
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.853080 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.882566 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvpj\" (UniqueName: \"kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj\") pod \"heat-db-sync-j52q4\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.895824 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.121737387 podStartE2EDuration="5.895803552s" podCreationTimestamp="2025-11-28 21:14:37 +0000 UTC" firstStartedPulling="2025-11-28 21:14:37.979218622 +0000 UTC m=+1517.447866531" lastFinishedPulling="2025-11-28 21:14:41.753284787 +0000 UTC m=+1521.221932696" observedRunningTime="2025-11-28 21:14:42.886925763 +0000 UTC m=+1522.355573682" watchObservedRunningTime="2025-11-28 21:14:42.895803552 +0000 UTC m=+1522.364451461"
Nov 28 21:14:42 crc kubenswrapper[4957]: I1128 21:14:42.962650 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j52q4"
Nov 28 21:14:43 crc kubenswrapper[4957]: I1128 21:14:43.573446 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-j52q4"]
Nov 28 21:14:43 crc kubenswrapper[4957]: I1128 21:14:43.882334 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j52q4" event={"ID":"6f3d0cd5-b463-4340-9c00-7d226bec612a","Type":"ContainerStarted","Data":"1081b61d8d274e331d22eb59904d263aa9f20f84ec2057b079607cd75d3a306a"}
Nov 28 21:14:44 crc kubenswrapper[4957]: I1128 21:14:44.519057 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
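The probe entries bracket kube-state-metrics-0's warm-up: status="" at first sync (21:14:35.749625) and status="ready" about nine seconds later (21:14:44.519057), a gap consistent with a periodic readiness probe succeeding on an early attempt. The actual probe settings are not visible in this log; the sketch below is purely illustrative, with assumed path, port, and timings:

    // Sketch: the shape of a readiness probe that would produce the
    // ""/"ready" transition above. Every concrete value here is assumed.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	probe := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Path:   "/healthz",             // assumed
    				Port:   intstr.FromInt32(8443), // assumed
    				Scheme: corev1.URISchemeHTTPS,  // TLS certs are mounted above
    			},
    		},
    		PeriodSeconds:    10, // assumed; would fit the ~9s gap to "ready"
    		FailureThreshold: 3,  // assumed
    	}
    	fmt.Printf("readinessProbe: %+v\n", probe.ProbeHandler.HTTPGet)
    }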
Nov 28 21:14:44 crc kubenswrapper[4957]: I1128 21:14:44.742786 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.255799 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.256254 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-central-agent" containerID="cri-o://3e89699f27ee0b3477d65826d8002b9a9f4066289e6268e29c482f8f8cc42e14" gracePeriod=30
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.256860 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="proxy-httpd" containerID="cri-o://1bcd5097c6f47fe994ecb88bb6352a7e7d6b5d17b63edf53c4fa36608c482217" gracePeriod=30
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.256931 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="sg-core" containerID="cri-o://eb158670947b4051a92e670ee0d0a0f7a34ced85fcf6af4deb0d269449078b24" gracePeriod=30
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.256985 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-notification-agent" containerID="cri-o://41440d1007dfa4cb0447b904fcd916b9db4df338a1935b03185a0fbbe8632b22" gracePeriod=30
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.921280 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923044 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerID="1bcd5097c6f47fe994ecb88bb6352a7e7d6b5d17b63edf53c4fa36608c482217" exitCode=0
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923067 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerID="eb158670947b4051a92e670ee0d0a0f7a34ced85fcf6af4deb0d269449078b24" exitCode=2
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923074 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerID="41440d1007dfa4cb0447b904fcd916b9db4df338a1935b03185a0fbbe8632b22" exitCode=0
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923080 4957 generic.go:334] "Generic (PLEG): container finished" podID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerID="3e89699f27ee0b3477d65826d8002b9a9f4066289e6268e29c482f8f8cc42e14" exitCode=0
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923100 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerDied","Data":"1bcd5097c6f47fe994ecb88bb6352a7e7d6b5d17b63edf53c4fa36608c482217"}
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923124 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerDied","Data":"eb158670947b4051a92e670ee0d0a0f7a34ced85fcf6af4deb0d269449078b24"}
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923135 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerDied","Data":"41440d1007dfa4cb0447b904fcd916b9db4df338a1935b03185a0fbbe8632b22"}
Nov 28 21:14:45 crc kubenswrapper[4957]: I1128 21:14:45.923144 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerDied","Data":"3e89699f27ee0b3477d65826d8002b9a9f4066289e6268e29c482f8f8cc42e14"}
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.589531 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.597191 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-scripts" (OuterVolumeSpecName: "scripts") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.620534 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-kube-api-access-zk4kr" (OuterVolumeSpecName: "kube-api-access-zk4kr") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "kube-api-access-zk4kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.636055 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.689497 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.689864 4957 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.689877 4957 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.689890 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk4kr\" (UniqueName: \"kubernetes.io/projected/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-kube-api-access-zk4kr\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.689902 4957 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.717577 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.791136 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-config-data" (OuterVolumeSpecName: "config-data") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.793069 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.793092 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.804389 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd04e3e7-6fa3-48e9-838f-56571dd69d8d" (UID: "bd04e3e7-6fa3-48e9-838f-56571dd69d8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.895031 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd04e3e7-6fa3-48e9-838f-56571dd69d8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.940477 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd04e3e7-6fa3-48e9-838f-56571dd69d8d","Type":"ContainerDied","Data":"0bf64af060228b3a63f2a18a58b4fa203986ab1eae7ff7d98595d24d7341a715"} Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.940536 4957 scope.go:117] "RemoveContainer" containerID="1bcd5097c6f47fe994ecb88bb6352a7e7d6b5d17b63edf53c4fa36608c482217" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.940541 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.974127 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:46 crc kubenswrapper[4957]: I1128 21:14:46.996844 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.010737 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:47 crc kubenswrapper[4957]: E1128 21:14:47.011309 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="proxy-httpd" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011326 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="proxy-httpd" Nov 28 21:14:47 crc kubenswrapper[4957]: E1128 21:14:47.011354 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="sg-core" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011361 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="sg-core" Nov 28 21:14:47 crc kubenswrapper[4957]: E1128 21:14:47.011391 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-central-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011398 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-central-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: E1128 21:14:47.011414 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-notification-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011421 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-notification-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011631 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-notification-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011646 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="sg-core" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011655 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="ceilometer-central-agent" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.011668 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" containerName="proxy-httpd" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.013677 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.016038 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.016371 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.016531 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.030510 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.039464 4957 scope.go:117] "RemoveContainer" containerID="eb158670947b4051a92e670ee0d0a0f7a34ced85fcf6af4deb0d269449078b24" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100274 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-run-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-log-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100435 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-config-data\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100593 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100639 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.100753 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5765x\" (UniqueName: \"kubernetes.io/projected/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-kube-api-access-5765x\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc 
kubenswrapper[4957]: I1128 21:14:47.100848 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-scripts\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.116339 4957 scope.go:117] "RemoveContainer" containerID="41440d1007dfa4cb0447b904fcd916b9db4df338a1935b03185a0fbbe8632b22" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.162004 4957 scope.go:117] "RemoveContainer" containerID="3e89699f27ee0b3477d65826d8002b9a9f4066289e6268e29c482f8f8cc42e14" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203013 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203060 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5765x\" (UniqueName: \"kubernetes.io/projected/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-kube-api-access-5765x\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203145 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-scripts\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203265 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-run-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203286 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-log-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203330 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-config-data\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203406 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.203431 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 
crc kubenswrapper[4957]: I1128 21:14:47.204292 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-log-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.205442 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-run-httpd\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.208504 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-scripts\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.208942 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.208961 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-config-data\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.210648 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.211237 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.225641 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5765x\" (UniqueName: \"kubernetes.io/projected/01b3ddaf-137b-49d1-9d77-0fa9eee151bd-kube-api-access-5765x\") pod \"ceilometer-0\" (UID: \"01b3ddaf-137b-49d1-9d77-0fa9eee151bd\") " pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.340818 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 21:14:47 crc kubenswrapper[4957]: I1128 21:14:47.937975 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 21:14:47 crc kubenswrapper[4957]: W1128 21:14:47.957051 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b3ddaf_137b_49d1_9d77_0fa9eee151bd.slice/crio-0218b4a06edf5cc6663924108e099418fc264edae5758c227b1cd025132e8859 WatchSource:0}: Error finding container 0218b4a06edf5cc6663924108e099418fc264edae5758c227b1cd025132e8859: Status 404 returned error can't find the container with id 0218b4a06edf5cc6663924108e099418fc264edae5758c227b1cd025132e8859 Nov 28 21:14:48 crc kubenswrapper[4957]: I1128 21:14:48.828012 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd04e3e7-6fa3-48e9-838f-56571dd69d8d" path="/var/lib/kubelet/pods/bd04e3e7-6fa3-48e9-838f-56571dd69d8d/volumes" Nov 28 21:14:48 crc kubenswrapper[4957]: I1128 21:14:48.974676 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01b3ddaf-137b-49d1-9d77-0fa9eee151bd","Type":"ContainerStarted","Data":"0218b4a06edf5cc6663924108e099418fc264edae5758c227b1cd025132e8859"} Nov 28 21:14:49 crc kubenswrapper[4957]: I1128 21:14:49.641591 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="rabbitmq" containerID="cri-o://129144dcbc530850a176fbfb46943deab971ce1b88efa897a9f9406693c41cb3" gracePeriod=604796 Nov 28 21:14:50 crc kubenswrapper[4957]: I1128 21:14:50.705146 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="rabbitmq" containerID="cri-o://d5952c73bcd1d128b1c2ae5639b7094369a74077616a79fa343219238524b3d1" gracePeriod=604796 Nov 28 21:14:57 crc kubenswrapper[4957]: I1128 21:14:57.103918 4957 generic.go:334] "Generic (PLEG): container finished" podID="752b9e43-44cd-4526-8393-6ae735497707" containerID="d5952c73bcd1d128b1c2ae5639b7094369a74077616a79fa343219238524b3d1" exitCode=0 Nov 28 21:14:57 crc kubenswrapper[4957]: I1128 21:14:57.104011 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerDied","Data":"d5952c73bcd1d128b1c2ae5639b7094369a74077616a79fa343219238524b3d1"} Nov 28 21:14:58 crc kubenswrapper[4957]: I1128 21:14:58.595526 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.275760 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.278634 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.283142 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.297305 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.454803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455185 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgts\" (UniqueName: \"kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455360 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455376 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455402 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.455624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.558971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559108 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtgts\" (UniqueName: \"kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559383 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559424 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559481 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559574 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.559719 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.561029 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.561044 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.561737 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: 
\"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.562055 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.562145 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.563320 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.593180 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtgts\" (UniqueName: \"kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts\") pod \"dnsmasq-dns-7d84b4d45c-r8gj6\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:14:59 crc kubenswrapper[4957]: I1128 21:14:59.615643 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.140400 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98"] Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.143485 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.152162 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.154154 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.154884 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98"] Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.190624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kt5d\" (UniqueName: \"kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.191063 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.191135 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.293384 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.293705 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.293854 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kt5d\" (UniqueName: \"kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.295031 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume\") pod 
\"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.299821 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.320511 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kt5d\" (UniqueName: \"kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d\") pod \"collect-profiles-29406075-pnn98\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:00 crc kubenswrapper[4957]: I1128 21:15:00.475560 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:01 crc kubenswrapper[4957]: I1128 21:15:01.177605 4957 generic.go:334] "Generic (PLEG): container finished" podID="396562bc-990c-4874-894c-e553f8b3dae7" containerID="129144dcbc530850a176fbfb46943deab971ce1b88efa897a9f9406693c41cb3" exitCode=0 Nov 28 21:15:01 crc kubenswrapper[4957]: I1128 21:15:01.177647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerDied","Data":"129144dcbc530850a176fbfb46943deab971ce1b88efa897a9f9406693c41cb3"} Nov 28 21:15:03 crc kubenswrapper[4957]: I1128 21:15:03.874853 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: i/o timeout" Nov 28 21:15:06 crc kubenswrapper[4957]: E1128 21:15:06.594970 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 28 21:15:06 crc kubenswrapper[4957]: E1128 21:15:06.595485 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 28 21:15:06 crc kubenswrapper[4957]: E1128 21:15:06.595601 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwvpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-j52q4_openstack(6f3d0cd5-b463-4340-9c00-7d226bec612a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:15:06 crc kubenswrapper[4957]: E1128 21:15:06.596807 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-j52q4" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.697521 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779196 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779284 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779342 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779367 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779409 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zbnd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779526 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779642 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779730 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779796 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779822 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: 
\"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.779875 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info\") pod \"752b9e43-44cd-4526-8393-6ae735497707\" (UID: \"752b9e43-44cd-4526-8393-6ae735497707\") " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.783584 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.784856 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.786805 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.802974 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd" (OuterVolumeSpecName: "kube-api-access-4zbnd") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "kube-api-access-4zbnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.806602 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.808129 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info" (OuterVolumeSpecName: "pod-info") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.827142 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.859686 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885469 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885510 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885524 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/752b9e43-44cd-4526-8393-6ae735497707-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885551 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885567 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885583 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/752b9e43-44cd-4526-8393-6ae735497707-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885595 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.885608 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zbnd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-kube-api-access-4zbnd\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.898832 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data" (OuterVolumeSpecName: "config-data") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.926553 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf" (OuterVolumeSpecName: "server-conf") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.931482 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.988271 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.988326 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/752b9e43-44cd-4526-8393-6ae735497707-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:06 crc kubenswrapper[4957]: I1128 21:15:06.988341 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.027151 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "752b9e43-44cd-4526-8393-6ae735497707" (UID: "752b9e43-44cd-4526-8393-6ae735497707"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.090628 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/752b9e43-44cd-4526-8393-6ae735497707-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.262165 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"752b9e43-44cd-4526-8393-6ae735497707","Type":"ContainerDied","Data":"aff117b965566ad9b7954c5c15b9c8d59efdc1a05429b4c0c417a658edbbba14"} Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.262232 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.262271 4957 scope.go:117] "RemoveContainer" containerID="d5952c73bcd1d128b1c2ae5639b7094369a74077616a79fa343219238524b3d1" Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.266327 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-j52q4" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.321149 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.347594 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.369477 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.370030 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="rabbitmq" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.370047 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="rabbitmq" Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.370075 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="setup-container" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.370081 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="setup-container" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.370363 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="752b9e43-44cd-4526-8393-6ae735497707" containerName="rabbitmq" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.371714 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.373610 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.374871 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.374922 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cr7k9" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.374984 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.375262 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.375282 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.377115 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.381572 4957 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.381611 4957 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 28 21:15:07 crc kubenswrapper[4957]: E1128 21:15:07.381749 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd9h656h7ch5f9hf8h56bh5d6h5c4h546h67bh5fdh557h549h57h698hb9h686h5fh598h555h697hd8hfdh55fh64dh56chdch578h656h5fbh68bh64q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5765x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(01b3ddaf-137b-49d1-9d77-0fa9eee151bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.386752 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.395007 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.396557 4957 scope.go:117] "RemoveContainer" containerID="dda5c3dd1b7579795754e61402f95bb87a0f38f6ecd48a42588d0ac3e01952f8" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.514459 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.514789 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.514832 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.514862 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.514991 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515041 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: 
\"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515083 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2h5\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515126 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515234 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515256 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515321 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie\") pod \"396562bc-990c-4874-894c-e553f8b3dae7\" (UID: \"396562bc-990c-4874-894c-e553f8b3dae7\") " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515795 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b6a2345-f928-41e0-bb0d-efd6ca576e42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515862 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515929 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.515986 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516004 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516024 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4sj\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-kube-api-access-jf4sj\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516091 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516120 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516135 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b6a2345-f928-41e0-bb0d-efd6ca576e42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516158 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516180 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.516738 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.518581 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.519086 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.520411 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.521873 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info" (OuterVolumeSpecName: "pod-info") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.539749 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5" (OuterVolumeSpecName: "kube-api-access-cm2h5") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "kube-api-access-cm2h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.539477 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.541170 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.568439 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data" (OuterVolumeSpecName: "config-data") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.621523 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf" (OuterVolumeSpecName: "server-conf") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.623134 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b6a2345-f928-41e0-bb0d-efd6ca576e42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.624204 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.623200 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632624 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632760 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632805 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf4sj\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-kube-api-access-jf4sj\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632966 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.632981 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b6a2345-f928-41e0-bb0d-efd6ca576e42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633006 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633038 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633225 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633239 4957 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396562bc-990c-4874-894c-e553f8b3dae7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633248 4957 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633267 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633279 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633289 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633299 4957 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396562bc-990c-4874-894c-e553f8b3dae7-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633307 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2h5\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-kube-api-access-cm2h5\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633317 4957 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.633327 4957 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396562bc-990c-4874-894c-e553f8b3dae7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.640426 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.640641 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.661053 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.661909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.666943 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b6a2345-f928-41e0-bb0d-efd6ca576e42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.676766 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.680113 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b6a2345-f928-41e0-bb0d-efd6ca576e42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.684594 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf4sj\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-kube-api-access-jf4sj\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.689986 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b6a2345-f928-41e0-bb0d-efd6ca576e42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.692038 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.695934 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b6a2345-f928-41e0-bb0d-efd6ca576e42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.735384 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.760763 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b6a2345-f928-41e0-bb0d-efd6ca576e42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.795337 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "396562bc-990c-4874-894c-e553f8b3dae7" (UID: "396562bc-990c-4874-894c-e553f8b3dae7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:07 crc kubenswrapper[4957]: I1128 21:15:07.839043 4957 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396562bc-990c-4874-894c-e553f8b3dae7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.022586 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: W1128 21:15:08.064404 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877be3f7_c2ac_4682_87ca_10538b1a5973.slice/crio-839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a WatchSource:0}: Error finding container 839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a: Status 404 returned error can't find the container with id 839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.070433 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98"] Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.191610 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:15:08 crc kubenswrapper[4957]: W1128 21:15:08.208379 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0caaf41_287f_4108_8226_5da957a9ec51.slice/crio-57818ae9c5e38793c3c928b00cb3d39c147ee487f9e9c723b1b8612b073a6c62 WatchSource:0}: Error finding container 57818ae9c5e38793c3c928b00cb3d39c147ee487f9e9c723b1b8612b073a6c62: Status 404 returned error can't find the container with id 57818ae9c5e38793c3c928b00cb3d39c147ee487f9e9c723b1b8612b073a6c62 Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.375123 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" event={"ID":"877be3f7-c2ac-4682-87ca-10538b1a5973","Type":"ContainerStarted","Data":"839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a"} Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.409510 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" event={"ID":"f0caaf41-287f-4108-8226-5da957a9ec51","Type":"ContainerStarted","Data":"57818ae9c5e38793c3c928b00cb3d39c147ee487f9e9c723b1b8612b073a6c62"} Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.435556 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396562bc-990c-4874-894c-e553f8b3dae7","Type":"ContainerDied","Data":"711e0236c3e879c1850a2f9915f932f923f54bf8a0dd71589341ae8c4e4f5c2e"} Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.435609 4957 scope.go:117] "RemoveContainer" containerID="129144dcbc530850a176fbfb46943deab971ce1b88efa897a9f9406693c41cb3" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.435773 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.510858 4957 scope.go:117] "RemoveContainer" containerID="effcb0c3cd0c8a3dfd159e80c73618f8f4a23a27ca559dd7532e8f835c678840" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.518839 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.555694 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.583324 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:15:08 crc kubenswrapper[4957]: E1128 21:15:08.583939 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="setup-container" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.583956 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="setup-container" Nov 28 21:15:08 crc kubenswrapper[4957]: E1128 21:15:08.584016 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="rabbitmq" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.584022 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="rabbitmq" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.584273 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="396562bc-990c-4874-894c-e553f8b3dae7" containerName="rabbitmq" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.585636 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.587813 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.604752 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.604885 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2szl" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.604981 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.605087 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.605142 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.605223 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.604756 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668229 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668356 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668377 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668467 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668495 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39bd199d-d600-4b4a-9d31-831e346ea98d-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668560 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrp2\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-kube-api-access-bvrp2\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39bd199d-d600-4b4a-9d31-831e346ea98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668785 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.668957 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.689194 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.770893 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.770953 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.771027 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772048 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") 
pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772102 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772187 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772258 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39bd199d-d600-4b4a-9d31-831e346ea98d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.772939 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773063 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773524 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-config-data\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773821 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773860 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrp2\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-kube-api-access-bvrp2\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773913 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39bd199d-d600-4b4a-9d31-831e346ea98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.773957 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.774256 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.774946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39bd199d-d600-4b4a-9d31-831e346ea98d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.775988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39bd199d-d600-4b4a-9d31-831e346ea98d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.776552 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.777561 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39bd199d-d600-4b4a-9d31-831e346ea98d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.778194 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.790993 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrp2\" (UniqueName: \"kubernetes.io/projected/39bd199d-d600-4b4a-9d31-831e346ea98d-kube-api-access-bvrp2\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.832598 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"39bd199d-d600-4b4a-9d31-831e346ea98d\") " pod="openstack/rabbitmq-server-0" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.847673 4957 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396562bc-990c-4874-894c-e553f8b3dae7" path="/var/lib/kubelet/pods/396562bc-990c-4874-894c-e553f8b3dae7/volumes" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.850390 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752b9e43-44cd-4526-8393-6ae735497707" path="/var/lib/kubelet/pods/752b9e43-44cd-4526-8393-6ae735497707/volumes" Nov 28 21:15:08 crc kubenswrapper[4957]: I1128 21:15:08.980716 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.478133 4957 generic.go:334] "Generic (PLEG): container finished" podID="877be3f7-c2ac-4682-87ca-10538b1a5973" containerID="daf13cfafed9c25376dd2f3de21d6d5bf0d62e72936f3fed272627a6b77294a4" exitCode=0 Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.478518 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" event={"ID":"877be3f7-c2ac-4682-87ca-10538b1a5973","Type":"ContainerDied","Data":"daf13cfafed9c25376dd2f3de21d6d5bf0d62e72936f3fed272627a6b77294a4"} Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.481485 4957 generic.go:334] "Generic (PLEG): container finished" podID="f0caaf41-287f-4108-8226-5da957a9ec51" containerID="4a531d8b8cfcac8cbf13f3457b4fd33bff6d769233418952218c30ede550134c" exitCode=0 Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.481619 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" event={"ID":"f0caaf41-287f-4108-8226-5da957a9ec51","Type":"ContainerDied","Data":"4a531d8b8cfcac8cbf13f3457b4fd33bff6d769233418952218c30ede550134c"} Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.483933 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b6a2345-f928-41e0-bb0d-efd6ca576e42","Type":"ContainerStarted","Data":"37170f79ecc6c3293e8020d158d9076b0f6d3852d1ca12832a4522a4090a74e6"} Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.486233 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01b3ddaf-137b-49d1-9d77-0fa9eee151bd","Type":"ContainerStarted","Data":"91cae349ae8f253e1edd6b54e7f752941ded7695b992c0bdc50171567bb1913e"} Nov 28 21:15:09 crc kubenswrapper[4957]: W1128 21:15:09.553431 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bd199d_d600_4b4a_9d31_831e346ea98d.slice/crio-1af6f1b68935ea43e16ebf64fa35e046b065ec3dd930b654ad9f8ce43166ffdc WatchSource:0}: Error finding container 1af6f1b68935ea43e16ebf64fa35e046b065ec3dd930b654ad9f8ce43166ffdc: Status 404 returned error can't find the container with id 1af6f1b68935ea43e16ebf64fa35e046b065ec3dd930b654ad9f8ce43166ffdc Nov 28 21:15:09 crc kubenswrapper[4957]: I1128 21:15:09.560466 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 21:15:10 crc kubenswrapper[4957]: I1128 21:15:10.500630 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" event={"ID":"f0caaf41-287f-4108-8226-5da957a9ec51","Type":"ContainerStarted","Data":"6e3089af0e2abdb0049d0f8127ab3598abd15fa7d9092c102728b3783dad66f7"} Nov 28 21:15:10 crc kubenswrapper[4957]: I1128 21:15:10.501691 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"39bd199d-d600-4b4a-9d31-831e346ea98d","Type":"ContainerStarted","Data":"1af6f1b68935ea43e16ebf64fa35e046b065ec3dd930b654ad9f8ce43166ffdc"} Nov 28 21:15:10 crc kubenswrapper[4957]: I1128 21:15:10.501725 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:15:10 crc kubenswrapper[4957]: I1128 21:15:10.503157 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01b3ddaf-137b-49d1-9d77-0fa9eee151bd","Type":"ContainerStarted","Data":"ee535d6b67d61bc706976f19f7d9e34d9ac06b3bf3525aac7569ab945c38a29a"} Nov 28 21:15:10 crc kubenswrapper[4957]: I1128 21:15:10.527764 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" podStartSLOduration=11.527745844 podStartE2EDuration="11.527745844s" podCreationTimestamp="2025-11-28 21:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:15:10.523343486 +0000 UTC m=+1549.991991415" watchObservedRunningTime="2025-11-28 21:15:10.527745844 +0000 UTC m=+1549.996393753" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.022346 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.146753 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume\") pod \"877be3f7-c2ac-4682-87ca-10538b1a5973\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.146882 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume\") pod \"877be3f7-c2ac-4682-87ca-10538b1a5973\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.146919 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kt5d\" (UniqueName: \"kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d\") pod \"877be3f7-c2ac-4682-87ca-10538b1a5973\" (UID: \"877be3f7-c2ac-4682-87ca-10538b1a5973\") " Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.147780 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume" (OuterVolumeSpecName: "config-volume") pod "877be3f7-c2ac-4682-87ca-10538b1a5973" (UID: "877be3f7-c2ac-4682-87ca-10538b1a5973"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.186336 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d" (OuterVolumeSpecName: "kube-api-access-8kt5d") pod "877be3f7-c2ac-4682-87ca-10538b1a5973" (UID: "877be3f7-c2ac-4682-87ca-10538b1a5973"). InnerVolumeSpecName "kube-api-access-8kt5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.187491 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "877be3f7-c2ac-4682-87ca-10538b1a5973" (UID: "877be3f7-c2ac-4682-87ca-10538b1a5973"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.250087 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/877be3f7-c2ac-4682-87ca-10538b1a5973-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.250119 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kt5d\" (UniqueName: \"kubernetes.io/projected/877be3f7-c2ac-4682-87ca-10538b1a5973-kube-api-access-8kt5d\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.250133 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/877be3f7-c2ac-4682-87ca-10538b1a5973-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.515561 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b6a2345-f928-41e0-bb0d-efd6ca576e42","Type":"ContainerStarted","Data":"cc1333f2f0a8671476417af29178a434e1fe88f6247529ff6f3a76835ff20677"} Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.517053 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.517108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98" event={"ID":"877be3f7-c2ac-4682-87ca-10538b1a5973","Type":"ContainerDied","Data":"839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a"} Nov 28 21:15:11 crc kubenswrapper[4957]: I1128 21:15:11.517139 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839dc8394000949f940ca69612aef4c5a7b2ff10ea01a72dced2757fa0b14b9a" Nov 28 21:15:11 crc kubenswrapper[4957]: E1128 21:15:11.890949 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="01b3ddaf-137b-49d1-9d77-0fa9eee151bd" Nov 28 21:15:12 crc kubenswrapper[4957]: I1128 21:15:12.528076 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01b3ddaf-137b-49d1-9d77-0fa9eee151bd","Type":"ContainerStarted","Data":"6aa778bcf7a8cc2853b95f0e0f256ad931c35f55214fc844c6bd86b3fa2b6f3a"} Nov 28 21:15:12 crc kubenswrapper[4957]: I1128 21:15:12.528495 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 21:15:12 crc kubenswrapper[4957]: I1128 21:15:12.529880 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"39bd199d-d600-4b4a-9d31-831e346ea98d","Type":"ContainerStarted","Data":"0bbac4e087045c279167806300d7ae6a37f69f78f1b4731105419e1c1184cd80"} Nov 28 21:15:12 crc kubenswrapper[4957]: E1128 
21:15:12.530177 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="01b3ddaf-137b-49d1-9d77-0fa9eee151bd" Nov 28 21:15:13 crc kubenswrapper[4957]: E1128 21:15:13.544065 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="01b3ddaf-137b-49d1-9d77-0fa9eee151bd" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.611735 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j52q4" event={"ID":"6f3d0cd5-b463-4340-9c00-7d226bec612a","Type":"ContainerStarted","Data":"35e7822cf2ce6d67f867998ddf88931f7fdbde0393cab7975785b4f88a986e90"} Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.616519 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.654164 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-j52q4" podStartSLOduration=2.220410292 podStartE2EDuration="37.654141422s" podCreationTimestamp="2025-11-28 21:14:42 +0000 UTC" firstStartedPulling="2025-11-28 21:14:43.5683719 +0000 UTC m=+1523.037019809" lastFinishedPulling="2025-11-28 21:15:19.00210302 +0000 UTC m=+1558.470750939" observedRunningTime="2025-11-28 21:15:19.632312806 +0000 UTC m=+1559.100960735" watchObservedRunningTime="2025-11-28 21:15:19.654141422 +0000 UTC m=+1559.122789331" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.706857 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.707252 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="dnsmasq-dns" containerID="cri-o://dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e" gracePeriod=10 Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.862853 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-g7j2p"] Nov 28 21:15:19 crc kubenswrapper[4957]: E1128 21:15:19.863391 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877be3f7-c2ac-4682-87ca-10538b1a5973" containerName="collect-profiles" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.863405 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="877be3f7-c2ac-4682-87ca-10538b1a5973" containerName="collect-profiles" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.863670 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="877be3f7-c2ac-4682-87ca-10538b1a5973" containerName="collect-profiles" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.864922 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.887511 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-g7j2p"] Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.983686 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.983763 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.984148 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-config\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.984338 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb5wk\" (UniqueName: \"kubernetes.io/projected/228bc1b2-f53c-47ca-9063-2630d3331c8b-kube-api-access-sb5wk\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.984491 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.984521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:19 crc kubenswrapper[4957]: I1128 21:15:19.984553 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.087973 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 
21:15:20.088351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088441 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-config\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088500 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb5wk\" (UniqueName: \"kubernetes.io/projected/228bc1b2-f53c-47ca-9063-2630d3331c8b-kube-api-access-sb5wk\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088550 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088566 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.088929 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.089955 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-config\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.090506 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.091009 4957 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.094519 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.094832 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/228bc1b2-f53c-47ca-9063-2630d3331c8b-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.119048 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb5wk\" (UniqueName: \"kubernetes.io/projected/228bc1b2-f53c-47ca-9063-2630d3331c8b-kube-api-access-sb5wk\") pod \"dnsmasq-dns-6f6df4f56c-g7j2p\" (UID: \"228bc1b2-f53c-47ca-9063-2630d3331c8b\") " pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.281946 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.416119 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.498111 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.498254 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.498295 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.619022 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.619114 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc 
kubenswrapper[4957]: I1128 21:15:20.619544 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwxp\" (UniqueName: \"kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp\") pod \"49c11425-f89c-47b1-bc01-d25c62f2e36e\" (UID: \"49c11425-f89c-47b1-bc01-d25c62f2e36e\") " Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.651537 4957 generic.go:334] "Generic (PLEG): container finished" podID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerID="dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e" exitCode=0 Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.651587 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" event={"ID":"49c11425-f89c-47b1-bc01-d25c62f2e36e","Type":"ContainerDied","Data":"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e"} Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.651616 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" event={"ID":"49c11425-f89c-47b1-bc01-d25c62f2e36e","Type":"ContainerDied","Data":"1eb97b058360f4e198c2f98e26d15bc2f779c8a5c394ef89bc911496122197b2"} Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.651634 4957 scope.go:117] "RemoveContainer" containerID="dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.651805 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-pstbx" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.660512 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp" (OuterVolumeSpecName: "kube-api-access-6pwxp") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "kube-api-access-6pwxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.722700 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwxp\" (UniqueName: \"kubernetes.io/projected/49c11425-f89c-47b1-bc01-d25c62f2e36e-kube-api-access-6pwxp\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.723665 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.732743 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.749484 4957 scope.go:117] "RemoveContainer" containerID="caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.773148 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.818485 4957 scope.go:117] "RemoveContainer" containerID="dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e" Nov 28 21:15:20 crc kubenswrapper[4957]: E1128 21:15:20.824129 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e\": container with ID starting with dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e not found: ID does not exist" containerID="dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.824171 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e"} err="failed to get container status \"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e\": rpc error: code = NotFound desc = could not find container \"dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e\": container with ID starting with dcb840140ed6a24b80b9b94414bde96be2dbe17b1eda32c97c6305f12aff013e not found: ID does not exist" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.824199 4957 scope.go:117] "RemoveContainer" containerID="caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8" Nov 28 21:15:20 crc kubenswrapper[4957]: E1128 21:15:20.824618 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8\": container with ID starting with caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8 not found: ID does not exist" containerID="caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.824640 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8"} err="failed to get container status \"caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8\": rpc error: code = NotFound desc = could not find container \"caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8\": container with ID starting with caafe02497d929a3549733d9f2ef84fbd721ae52177975e89d451c34a6f94ce8 not found: ID does not exist" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.826850 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.827056 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.827627 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.828073 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.856759 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config" (OuterVolumeSpecName: "config") pod "49c11425-f89c-47b1-bc01-d25c62f2e36e" (UID: "49c11425-f89c-47b1-bc01-d25c62f2e36e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.930286 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.930324 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c11425-f89c-47b1-bc01-d25c62f2e36e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.986841 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:15:20 crc kubenswrapper[4957]: I1128 21:15:20.999888 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-pstbx"] Nov 28 21:15:21 crc kubenswrapper[4957]: W1128 21:15:21.032333 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod228bc1b2_f53c_47ca_9063_2630d3331c8b.slice/crio-ae63b1914c45fc278121d1caa6733cec81e118c98917c1b73bcf77e1c163a35d WatchSource:0}: Error finding container ae63b1914c45fc278121d1caa6733cec81e118c98917c1b73bcf77e1c163a35d: Status 404 returned error can't find the container with id ae63b1914c45fc278121d1caa6733cec81e118c98917c1b73bcf77e1c163a35d Nov 28 21:15:21 crc kubenswrapper[4957]: I1128 21:15:21.035182 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-g7j2p"] Nov 28 21:15:21 crc kubenswrapper[4957]: I1128 21:15:21.664187 4957 generic.go:334] "Generic (PLEG): container finished" podID="228bc1b2-f53c-47ca-9063-2630d3331c8b" containerID="ddc9e4543eb0f11d275d6c8edda5fa4d9726a46e05315d403cde26038162d885" exitCode=0 Nov 28 21:15:21 crc kubenswrapper[4957]: I1128 21:15:21.664443 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" event={"ID":"228bc1b2-f53c-47ca-9063-2630d3331c8b","Type":"ContainerDied","Data":"ddc9e4543eb0f11d275d6c8edda5fa4d9726a46e05315d403cde26038162d885"} Nov 28 21:15:21 crc kubenswrapper[4957]: I1128 21:15:21.664533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" 
event={"ID":"228bc1b2-f53c-47ca-9063-2630d3331c8b","Type":"ContainerStarted","Data":"ae63b1914c45fc278121d1caa6733cec81e118c98917c1b73bcf77e1c163a35d"} Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.677197 4957 generic.go:334] "Generic (PLEG): container finished" podID="6f3d0cd5-b463-4340-9c00-7d226bec612a" containerID="35e7822cf2ce6d67f867998ddf88931f7fdbde0393cab7975785b4f88a986e90" exitCode=0 Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.677256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j52q4" event={"ID":"6f3d0cd5-b463-4340-9c00-7d226bec612a","Type":"ContainerDied","Data":"35e7822cf2ce6d67f867998ddf88931f7fdbde0393cab7975785b4f88a986e90"} Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.680024 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" event={"ID":"228bc1b2-f53c-47ca-9063-2630d3331c8b","Type":"ContainerStarted","Data":"b376a5bfb169373a2d5fc3ee66ca9a11b8815f76a7a5d9e92c1687769ac40ec3"} Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.680379 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.715648 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" podStartSLOduration=3.715630576 podStartE2EDuration="3.715630576s" podCreationTimestamp="2025-11-28 21:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:15:22.711277369 +0000 UTC m=+1562.179925278" watchObservedRunningTime="2025-11-28 21:15:22.715630576 +0000 UTC m=+1562.184278495" Nov 28 21:15:22 crc kubenswrapper[4957]: I1128 21:15:22.830218 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" path="/var/lib/kubelet/pods/49c11425-f89c-47b1-bc01-d25c62f2e36e/volumes" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.139211 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-j52q4" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.310951 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwvpj\" (UniqueName: \"kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj\") pod \"6f3d0cd5-b463-4340-9c00-7d226bec612a\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.311191 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data\") pod \"6f3d0cd5-b463-4340-9c00-7d226bec612a\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.311281 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle\") pod \"6f3d0cd5-b463-4340-9c00-7d226bec612a\" (UID: \"6f3d0cd5-b463-4340-9c00-7d226bec612a\") " Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.323678 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj" (OuterVolumeSpecName: "kube-api-access-vwvpj") pod "6f3d0cd5-b463-4340-9c00-7d226bec612a" (UID: "6f3d0cd5-b463-4340-9c00-7d226bec612a"). InnerVolumeSpecName "kube-api-access-vwvpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.346224 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f3d0cd5-b463-4340-9c00-7d226bec612a" (UID: "6f3d0cd5-b463-4340-9c00-7d226bec612a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.396980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data" (OuterVolumeSpecName: "config-data") pod "6f3d0cd5-b463-4340-9c00-7d226bec612a" (UID: "6f3d0cd5-b463-4340-9c00-7d226bec612a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.413787 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.413816 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d0cd5-b463-4340-9c00-7d226bec612a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.413827 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwvpj\" (UniqueName: \"kubernetes.io/projected/6f3d0cd5-b463-4340-9c00-7d226bec612a-kube-api-access-vwvpj\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.707523 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j52q4" event={"ID":"6f3d0cd5-b463-4340-9c00-7d226bec612a","Type":"ContainerDied","Data":"1081b61d8d274e331d22eb59904d263aa9f20f84ec2057b079607cd75d3a306a"} Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.707564 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j52q4" Nov 28 21:15:24 crc kubenswrapper[4957]: I1128 21:15:24.707565 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1081b61d8d274e331d22eb59904d263aa9f20f84ec2057b079607cd75d3a306a" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.677598 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8f7b695b5-9dcxn"] Nov 28 21:15:25 crc kubenswrapper[4957]: E1128 21:15:25.679067 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" containerName="heat-db-sync" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.679168 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" containerName="heat-db-sync" Nov 28 21:15:25 crc kubenswrapper[4957]: E1128 21:15:25.679270 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="init" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.679326 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="init" Nov 28 21:15:25 crc kubenswrapper[4957]: E1128 21:15:25.679409 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="dnsmasq-dns" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.679466 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="dnsmasq-dns" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.679747 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c11425-f89c-47b1-bc01-d25c62f2e36e" containerName="dnsmasq-dns" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.679813 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" containerName="heat-db-sync" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.680665 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.691303 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8f7b695b5-9dcxn"] Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.730535 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-58bdf58698-25xts"] Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.734349 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.767594 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58bdf58698-25xts"] Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.824084 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-768b697649-7gz8m"] Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.830282 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.854509 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.854740 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-768b697649-7gz8m"] Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.856745 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5czz\" (UniqueName: \"kubernetes.io/projected/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-kube-api-access-j5czz\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.856803 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.856898 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.856962 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data-custom\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.856990 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-combined-ca-bundle\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.859772 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data-custom\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.859834 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvkhw\" (UniqueName: \"kubernetes.io/projected/eecb8bf2-f385-4670-a84c-611a1f373c8f-kube-api-access-tvkhw\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.859877 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-public-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.859963 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-combined-ca-bundle\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.860439 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-internal-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962469 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxk8j\" (UniqueName: \"kubernetes.io/projected/8fc58381-64db-46f6-9e97-93e8e4c45abe-kube-api-access-qxk8j\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-public-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962703 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-internal-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962755 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5czz\" (UniqueName: \"kubernetes.io/projected/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-kube-api-access-j5czz\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " 
pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962862 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.962904 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data-custom\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963066 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963105 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data-custom\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963130 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-combined-ca-bundle\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963147 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-combined-ca-bundle\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963165 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data-custom\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963197 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvkhw\" (UniqueName: \"kubernetes.io/projected/eecb8bf2-f385-4670-a84c-611a1f373c8f-kube-api-access-tvkhw\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " 
pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963234 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-public-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963282 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-combined-ca-bundle\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.963305 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-internal-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.969654 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data-custom\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.971431 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-config-data\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.975067 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data-custom\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.975187 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-public-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.975470 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-config-data\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.977165 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-internal-tls-certs\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.980723 
4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eecb8bf2-f385-4670-a84c-611a1f373c8f-combined-ca-bundle\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.985231 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-combined-ca-bundle\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.990064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvkhw\" (UniqueName: \"kubernetes.io/projected/eecb8bf2-f385-4670-a84c-611a1f373c8f-kube-api-access-tvkhw\") pod \"heat-engine-8f7b695b5-9dcxn\" (UID: \"eecb8bf2-f385-4670-a84c-611a1f373c8f\") " pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:25 crc kubenswrapper[4957]: I1128 21:15:25.993316 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5czz\" (UniqueName: \"kubernetes.io/projected/25301b86-e61f-4a9e-90e2-b1f1e9c045dc-kube-api-access-j5czz\") pod \"heat-api-58bdf58698-25xts\" (UID: \"25301b86-e61f-4a9e-90e2-b1f1e9c045dc\") " pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.000720 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.066716 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data-custom\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.067121 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.067169 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-combined-ca-bundle\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.067273 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-internal-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.067314 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-public-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: 
\"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.067335 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxk8j\" (UniqueName: \"kubernetes.io/projected/8fc58381-64db-46f6-9e97-93e8e4c45abe-kube-api-access-qxk8j\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.072390 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data-custom\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.077709 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-internal-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.078037 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-public-tls-certs\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.078727 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-config-data\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.079003 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc58381-64db-46f6-9e97-93e8e4c45abe-combined-ca-bundle\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.094825 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.096620 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxk8j\" (UniqueName: \"kubernetes.io/projected/8fc58381-64db-46f6-9e97-93e8e4c45abe-kube-api-access-qxk8j\") pod \"heat-cfnapi-768b697649-7gz8m\" (UID: \"8fc58381-64db-46f6-9e97-93e8e4c45abe\") " pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.173529 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.600015 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8f7b695b5-9dcxn"] Nov 28 21:15:26 crc kubenswrapper[4957]: W1128 21:15:26.602134 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeecb8bf2_f385_4670_a84c_611a1f373c8f.slice/crio-adb994120b162c90124f27c5e31881c7d83026d367489bf07cd2dafbd1ef009d WatchSource:0}: Error finding container adb994120b162c90124f27c5e31881c7d83026d367489bf07cd2dafbd1ef009d: Status 404 returned error can't find the container with id adb994120b162c90124f27c5e31881c7d83026d367489bf07cd2dafbd1ef009d Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.718718 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58bdf58698-25xts"] Nov 28 21:15:26 crc kubenswrapper[4957]: W1128 21:15:26.723924 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25301b86_e61f_4a9e_90e2_b1f1e9c045dc.slice/crio-59a8526b7925936daeebf4668112ed46c6bda51124eac7ecec89049f30e2e39d WatchSource:0}: Error finding container 59a8526b7925936daeebf4668112ed46c6bda51124eac7ecec89049f30e2e39d: Status 404 returned error can't find the container with id 59a8526b7925936daeebf4668112ed46c6bda51124eac7ecec89049f30e2e39d Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.734786 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01b3ddaf-137b-49d1-9d77-0fa9eee151bd","Type":"ContainerStarted","Data":"c5bce60202fc5998dd2af2cf129ba8cbef5a9b673231c790d3ce10919c479a0d"} Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.736473 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f7b695b5-9dcxn" event={"ID":"eecb8bf2-f385-4670-a84c-611a1f373c8f","Type":"ContainerStarted","Data":"adb994120b162c90124f27c5e31881c7d83026d367489bf07cd2dafbd1ef009d"} Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.765765 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.720152272 podStartE2EDuration="40.765739068s" podCreationTimestamp="2025-11-28 21:14:46 +0000 UTC" firstStartedPulling="2025-11-28 21:14:47.961569669 +0000 UTC m=+1527.430217578" lastFinishedPulling="2025-11-28 21:15:26.007156465 +0000 UTC m=+1565.475804374" observedRunningTime="2025-11-28 21:15:26.762532909 +0000 UTC m=+1566.231180818" watchObservedRunningTime="2025-11-28 21:15:26.765739068 +0000 UTC m=+1566.234386977" Nov 28 21:15:26 crc kubenswrapper[4957]: I1128 21:15:26.839744 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-768b697649-7gz8m"] Nov 28 21:15:26 crc kubenswrapper[4957]: W1128 21:15:26.842977 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fc58381_64db_46f6_9e97_93e8e4c45abe.slice/crio-cf92ef7b812fc39ea4c84835b8e2ba86714e47f29429bca8822a5625a4ed6bae WatchSource:0}: Error finding container cf92ef7b812fc39ea4c84835b8e2ba86714e47f29429bca8822a5625a4ed6bae: Status 404 returned error can't find the container with id cf92ef7b812fc39ea4c84835b8e2ba86714e47f29429bca8822a5625a4ed6bae Nov 28 21:15:27 crc kubenswrapper[4957]: I1128 21:15:27.754463 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-58bdf58698-25xts" event={"ID":"25301b86-e61f-4a9e-90e2-b1f1e9c045dc","Type":"ContainerStarted","Data":"59a8526b7925936daeebf4668112ed46c6bda51124eac7ecec89049f30e2e39d"} Nov 28 21:15:27 crc kubenswrapper[4957]: I1128 21:15:27.757061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f7b695b5-9dcxn" event={"ID":"eecb8bf2-f385-4670-a84c-611a1f373c8f","Type":"ContainerStarted","Data":"3a94465264688b81302a2a765aad20c81d74fc1dcae132c7bac613c9c7ed26c0"} Nov 28 21:15:27 crc kubenswrapper[4957]: I1128 21:15:27.758628 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:27 crc kubenswrapper[4957]: I1128 21:15:27.763158 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-768b697649-7gz8m" event={"ID":"8fc58381-64db-46f6-9e97-93e8e4c45abe","Type":"ContainerStarted","Data":"cf92ef7b812fc39ea4c84835b8e2ba86714e47f29429bca8822a5625a4ed6bae"} Nov 28 21:15:27 crc kubenswrapper[4957]: I1128 21:15:27.850581 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8f7b695b5-9dcxn" podStartSLOduration=2.850551094 podStartE2EDuration="2.850551094s" podCreationTimestamp="2025-11-28 21:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:15:27.832276974 +0000 UTC m=+1567.300924883" watchObservedRunningTime="2025-11-28 21:15:27.850551094 +0000 UTC m=+1567.319199003" Nov 28 21:15:29 crc kubenswrapper[4957]: I1128 21:15:29.786649 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-768b697649-7gz8m" event={"ID":"8fc58381-64db-46f6-9e97-93e8e4c45abe","Type":"ContainerStarted","Data":"44e7d263c32b500907c0b52a3a3363e50ba9c5641f9f1d7f26389235ec6e1e32"} Nov 28 21:15:29 crc kubenswrapper[4957]: I1128 21:15:29.787179 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:29 crc kubenswrapper[4957]: I1128 21:15:29.788990 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58bdf58698-25xts" event={"ID":"25301b86-e61f-4a9e-90e2-b1f1e9c045dc","Type":"ContainerStarted","Data":"eb11d9fec64a8154c4270493e21cab274949988306d1c3b8502c47c44042f819"} Nov 28 21:15:29 crc kubenswrapper[4957]: I1128 21:15:29.808930 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-768b697649-7gz8m" podStartSLOduration=2.890635279 podStartE2EDuration="4.80890831s" podCreationTimestamp="2025-11-28 21:15:25 +0000 UTC" firstStartedPulling="2025-11-28 21:15:26.850473381 +0000 UTC m=+1566.319121290" lastFinishedPulling="2025-11-28 21:15:28.768746412 +0000 UTC m=+1568.237394321" observedRunningTime="2025-11-28 21:15:29.80073766 +0000 UTC m=+1569.269385579" watchObservedRunningTime="2025-11-28 21:15:29.80890831 +0000 UTC m=+1569.277556219" Nov 28 21:15:29 crc kubenswrapper[4957]: I1128 21:15:29.820740 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-58bdf58698-25xts" podStartSLOduration=2.781438764 podStartE2EDuration="4.820717661s" podCreationTimestamp="2025-11-28 21:15:25 +0000 UTC" firstStartedPulling="2025-11-28 21:15:26.727177469 +0000 UTC m=+1566.195825378" lastFinishedPulling="2025-11-28 21:15:28.766456366 +0000 UTC m=+1568.235104275" observedRunningTime="2025-11-28 21:15:29.818056885 +0000 UTC m=+1569.286704804" 
watchObservedRunningTime="2025-11-28 21:15:29.820717661 +0000 UTC m=+1569.289365570" Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.285413 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-g7j2p" Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.399862 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.401409 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="dnsmasq-dns" containerID="cri-o://6e3089af0e2abdb0049d0f8127ab3598abd15fa7d9092c102728b3783dad66f7" gracePeriod=10 Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.808998 4957 generic.go:334] "Generic (PLEG): container finished" podID="f0caaf41-287f-4108-8226-5da957a9ec51" containerID="6e3089af0e2abdb0049d0f8127ab3598abd15fa7d9092c102728b3783dad66f7" exitCode=0 Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.809088 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" event={"ID":"f0caaf41-287f-4108-8226-5da957a9ec51","Type":"ContainerDied","Data":"6e3089af0e2abdb0049d0f8127ab3598abd15fa7d9092c102728b3783dad66f7"} Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.809329 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:30 crc kubenswrapper[4957]: I1128 21:15:30.993303 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.161967 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162050 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162081 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162107 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162329 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162377 4957 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtgts\" (UniqueName: \"kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.162520 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb\") pod \"f0caaf41-287f-4108-8226-5da957a9ec51\" (UID: \"f0caaf41-287f-4108-8226-5da957a9ec51\") " Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.168188 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts" (OuterVolumeSpecName: "kube-api-access-jtgts") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "kube-api-access-jtgts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.221978 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.229517 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.229980 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.235981 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.238890 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.242098 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config" (OuterVolumeSpecName: "config") pod "f0caaf41-287f-4108-8226-5da957a9ec51" (UID: "f0caaf41-287f-4108-8226-5da957a9ec51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265711 4957 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265795 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265806 4957 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265814 4957 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-config\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265826 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265836 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtgts\" (UniqueName: \"kubernetes.io/projected/f0caaf41-287f-4108-8226-5da957a9ec51-kube-api-access-jtgts\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.265844 4957 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0caaf41-287f-4108-8226-5da957a9ec51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.824106 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" event={"ID":"f0caaf41-287f-4108-8226-5da957a9ec51","Type":"ContainerDied","Data":"57818ae9c5e38793c3c928b00cb3d39c147ee487f9e9c723b1b8612b073a6c62"} Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.824142 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-r8gj6" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.824159 4957 scope.go:117] "RemoveContainer" containerID="6e3089af0e2abdb0049d0f8127ab3598abd15fa7d9092c102728b3783dad66f7" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.866272 4957 scope.go:117] "RemoveContainer" containerID="4a531d8b8cfcac8cbf13f3457b4fd33bff6d769233418952218c30ede550134c" Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.885707 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:15:31 crc kubenswrapper[4957]: I1128 21:15:31.907884 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-r8gj6"] Nov 28 21:15:32 crc kubenswrapper[4957]: I1128 21:15:32.917870 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" path="/var/lib/kubelet/pods/f0caaf41-287f-4108-8226-5da957a9ec51/volumes" Nov 28 21:15:36 crc kubenswrapper[4957]: I1128 21:15:36.050450 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8f7b695b5-9dcxn" Nov 28 21:15:36 crc kubenswrapper[4957]: I1128 21:15:36.115414 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:15:36 crc kubenswrapper[4957]: I1128 21:15:36.115674 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-565895bd86-z2gdh" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerName="heat-engine" containerID="cri-o://ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3" gracePeriod=60 Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.524195 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-58bdf58698-25xts" Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.614350 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-768b697649-7gz8m" Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.616028 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"] Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.616298 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-8445cd679c-6kwss" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerName="heat-api" containerID="cri-o://9e5543fca250dfea7b3fb960cb4ae737ef83d43ccf1b3b1e998e737df1a1d148" gracePeriod=60 Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.698124 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"] Nov 28 21:15:37 crc kubenswrapper[4957]: I1128 21:15:37.698401 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5f4944777d-4svqx" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerName="heat-cfnapi" containerID="cri-o://74ced27fc87051adc01aec7d44561800c7b4a5356f33699a5af97194890c82eb" gracePeriod=60 Nov 28 21:15:40 crc kubenswrapper[4957]: I1128 21:15:40.765456 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-8445cd679c-6kwss" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": read tcp 10.217.0.2:59866->10.217.0.221:8004: read: connection reset by peer" Nov 28 21:15:40 crc kubenswrapper[4957]: I1128 
21:15:40.844125 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5f4944777d-4svqx" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": read tcp 10.217.0.2:52220->10.217.0.222:8000: read: connection reset by peer" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.031415 4957 generic.go:334] "Generic (PLEG): container finished" podID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerID="9e5543fca250dfea7b3fb960cb4ae737ef83d43ccf1b3b1e998e737df1a1d148" exitCode=0 Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.031763 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8445cd679c-6kwss" event={"ID":"bb0b8e5f-611a-452d-9f0e-229f445c77d6","Type":"ContainerDied","Data":"9e5543fca250dfea7b3fb960cb4ae737ef83d43ccf1b3b1e998e737df1a1d148"} Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.041640 4957 generic.go:334] "Generic (PLEG): container finished" podID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerID="74ced27fc87051adc01aec7d44561800c7b4a5356f33699a5af97194890c82eb" exitCode=0 Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.041701 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f4944777d-4svqx" event={"ID":"b7b5b68f-4c6d-4002-a784-5f8e85470f5f","Type":"ContainerDied","Data":"74ced27fc87051adc01aec7d44561800c7b4a5356f33699a5af97194890c82eb"} Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.327484 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8445cd679c-6kwss" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.472416 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f4944777d-4svqx" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.508826 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.508917 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c659\" (UniqueName: \"kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.508980 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.509012 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.509112 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.509309 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.516346 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.522502 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659" (OuterVolumeSpecName: "kube-api-access-9c659") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "kube-api-access-9c659". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.568401 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.585574 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611007 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data" (OuterVolumeSpecName: "config-data") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611375 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9p2k\" (UniqueName: \"kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611437 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611465 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611694 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611728 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") pod \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\" (UID: \"bb0b8e5f-611a-452d-9f0e-229f445c77d6\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611785 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.611847 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs\") pod \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\" (UID: \"b7b5b68f-4c6d-4002-a784-5f8e85470f5f\") " Nov 28 21:15:41 crc kubenswrapper[4957]: W1128 21:15:41.612616 4957 empty_dir.go:500] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/bb0b8e5f-611a-452d-9f0e-229f445c77d6/volumes/kubernetes.io~secret/config-data Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.613023 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data" (OuterVolumeSpecName: "config-data") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.613618 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb0b8e5f-611a-452d-9f0e-229f445c77d6" (UID: "bb0b8e5f-611a-452d-9f0e-229f445c77d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.616991 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k" (OuterVolumeSpecName: "kube-api-access-r9p2k") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "kube-api-access-r9p2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617251 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617285 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617354 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c659\" (UniqueName: \"kubernetes.io/projected/bb0b8e5f-611a-452d-9f0e-229f445c77d6-kube-api-access-9c659\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617369 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617381 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.617394 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0b8e5f-611a-452d-9f0e-229f445c77d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.625809 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.659567 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.678542 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.687485 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data" (OuterVolumeSpecName: "config-data") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.689701 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b7b5b68f-4c6d-4002-a784-5f8e85470f5f" (UID: "b7b5b68f-4c6d-4002-a784-5f8e85470f5f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.719977 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.720027 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9p2k\" (UniqueName: \"kubernetes.io/projected/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-kube-api-access-r9p2k\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.720047 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.720058 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.720068 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:41 crc kubenswrapper[4957]: I1128 21:15:41.720077 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b5b68f-4c6d-4002-a784-5f8e85470f5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.056992 4957 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-cfnapi-5f4944777d-4svqx" event={"ID":"b7b5b68f-4c6d-4002-a784-5f8e85470f5f","Type":"ContainerDied","Data":"3f042530c7ef3952daba8695f3cde5724dc8ccc90e3a0b6f2568ecb6f272f8d9"} Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.057038 4957 scope.go:117] "RemoveContainer" containerID="74ced27fc87051adc01aec7d44561800c7b4a5356f33699a5af97194890c82eb" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.057154 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f4944777d-4svqx" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.065604 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8445cd679c-6kwss" event={"ID":"bb0b8e5f-611a-452d-9f0e-229f445c77d6","Type":"ContainerDied","Data":"70b7b5f35bbcd483bba86ff7b332d405cd3b375872c4877fff48f95794cdcfe5"} Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.065682 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8445cd679c-6kwss" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.108300 4957 scope.go:117] "RemoveContainer" containerID="9e5543fca250dfea7b3fb960cb4ae737ef83d43ccf1b3b1e998e737df1a1d148" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.117303 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"] Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.131444 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8445cd679c-6kwss"] Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.144670 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"] Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.154500 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5f4944777d-4svqx"] Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.829132 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" path="/var/lib/kubelet/pods/b7b5b68f-4c6d-4002-a784-5f8e85470f5f/volumes" Nov 28 21:15:42 crc kubenswrapper[4957]: I1128 21:15:42.830844 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" path="/var/lib/kubelet/pods/bb0b8e5f-611a-452d-9f0e-229f445c77d6/volumes" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.087090 4957 generic.go:334] "Generic (PLEG): container finished" podID="7b6a2345-f928-41e0-bb0d-efd6ca576e42" containerID="cc1333f2f0a8671476417af29178a434e1fe88f6247529ff6f3a76835ff20677" exitCode=0 Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.087458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b6a2345-f928-41e0-bb0d-efd6ca576e42","Type":"ContainerDied","Data":"cc1333f2f0a8671476417af29178a434e1fe88f6247529ff6f3a76835ff20677"} Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.395417 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-nxvvm"] Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.406755 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-nxvvm"] Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.511387 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jm9wx"] Nov 28 21:15:43 crc kubenswrapper[4957]: E1128 21:15:43.512324 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="dnsmasq-dns" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512352 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="dnsmasq-dns" Nov 28 21:15:43 crc kubenswrapper[4957]: E1128 21:15:43.512376 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerName="heat-api" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512386 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerName="heat-api" Nov 28 21:15:43 crc kubenswrapper[4957]: E1128 21:15:43.512412 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="init" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512421 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="init" Nov 28 21:15:43 crc kubenswrapper[4957]: E1128 21:15:43.512452 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerName="heat-cfnapi" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512461 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerName="heat-cfnapi" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512747 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b5b68f-4c6d-4002-a784-5f8e85470f5f" containerName="heat-cfnapi" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512770 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0b8e5f-611a-452d-9f0e-229f445c77d6" containerName="heat-api" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.512789 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0caaf41-287f-4108-8226-5da957a9ec51" containerName="dnsmasq-dns" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.513786 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.516009 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.526782 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jm9wx"] Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.686465 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.686585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.686622 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.686683 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4z4\" (UniqueName: \"kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.788754 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.788859 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4z4\" (UniqueName: \"kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.789052 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.789095 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.794798 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.794991 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.816555 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.836903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4z4\" (UniqueName: \"kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4\") pod \"aodh-db-sync-jm9wx\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:43 crc kubenswrapper[4957]: I1128 21:15:43.841754 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.121890 4957 generic.go:334] "Generic (PLEG): container finished" podID="39bd199d-d600-4b4a-9d31-831e346ea98d" containerID="0bbac4e087045c279167806300d7ae6a37f69f78f1b4731105419e1c1184cd80" exitCode=0 Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.121993 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"39bd199d-d600-4b4a-9d31-831e346ea98d","Type":"ContainerDied","Data":"0bbac4e087045c279167806300d7ae6a37f69f78f1b4731105419e1c1184cd80"} Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.146791 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b6a2345-f928-41e0-bb0d-efd6ca576e42","Type":"ContainerStarted","Data":"576da965b06d4a32e2262d04960e82069754742ce9705fa7c4ed23011637d571"} Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.147846 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.189552 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.189530722 podStartE2EDuration="37.189530722s" podCreationTimestamp="2025-11-28 21:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:15:44.170872413 +0000 UTC m=+1583.639520332" watchObservedRunningTime="2025-11-28 21:15:44.189530722 +0000 UTC m=+1583.658178631" Nov 28 21:15:44 crc kubenswrapper[4957]: W1128 21:15:44.498538 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e5a6783_443c_4aa3_8985_2476a17d6f48.slice/crio-7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2 WatchSource:0}: Error finding container 7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2: Status 404 returned error can't find the container with id 
7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2 Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.500645 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.504475 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jm9wx"] Nov 28 21:15:44 crc kubenswrapper[4957]: I1128 21:15:44.829851 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e06d4815-a61b-46a8-bb15-6f7e21b2dfd4" path="/var/lib/kubelet/pods/e06d4815-a61b-46a8-bb15-6f7e21b2dfd4/volumes" Nov 28 21:15:45 crc kubenswrapper[4957]: I1128 21:15:45.169778 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"39bd199d-d600-4b4a-9d31-831e346ea98d","Type":"ContainerStarted","Data":"361d529d0d23affa7a9b7166589087fc7fd911cb5218fb0c1583059f96a4222e"} Nov 28 21:15:45 crc kubenswrapper[4957]: I1128 21:15:45.170388 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 21:15:45 crc kubenswrapper[4957]: I1128 21:15:45.171565 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jm9wx" event={"ID":"8e5a6783-443c-4aa3-8985-2476a17d6f48","Type":"ContainerStarted","Data":"7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2"} Nov 28 21:15:45 crc kubenswrapper[4957]: I1128 21:15:45.216809 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.216784381 podStartE2EDuration="37.216784381s" podCreationTimestamp="2025-11-28 21:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 21:15:45.197912287 +0000 UTC m=+1584.666560216" watchObservedRunningTime="2025-11-28 21:15:45.216784381 +0000 UTC m=+1584.685432290" Nov 28 21:15:45 crc kubenswrapper[4957]: E1128 21:15:45.972263 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:15:45 crc kubenswrapper[4957]: E1128 21:15:45.973459 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:15:45 crc kubenswrapper[4957]: E1128 21:15:45.974966 4957 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 21:15:45 crc kubenswrapper[4957]: E1128 21:15:45.975008 4957 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-565895bd86-z2gdh" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerName="heat-engine" Nov 28 21:15:49 crc 
kubenswrapper[4957]: I1128 21:15:49.947604 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp"] Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.950563 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.953463 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.954030 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.954641 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.954917 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:15:49 crc kubenswrapper[4957]: I1128 21:15:49.967940 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp"] Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.067783 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgb7c\" (UniqueName: \"kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.068349 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.068407 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.068585 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.171305 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgb7c\" (UniqueName: \"kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.171630 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.171710 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.171854 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.178790 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.195868 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.196613 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.203966 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgb7c\" (UniqueName: \"kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.249146 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jm9wx" event={"ID":"8e5a6783-443c-4aa3-8985-2476a17d6f48","Type":"ContainerStarted","Data":"ea2ccdf1284a7aa7d8981ca6f9b75fd4358d74e94dfc58ce135d4c6d08232b24"} Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.274249 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-db-sync-jm9wx" podStartSLOduration=2.422931431 podStartE2EDuration="7.274231825s" podCreationTimestamp="2025-11-28 21:15:43 +0000 UTC" firstStartedPulling="2025-11-28 21:15:44.500399705 +0000 UTC m=+1583.969047614" lastFinishedPulling="2025-11-28 21:15:49.351700099 +0000 UTC m=+1588.820348008" observedRunningTime="2025-11-28 21:15:50.269044087 +0000 UTC m=+1589.737691996" watchObservedRunningTime="2025-11-28 21:15:50.274231825 +0000 UTC m=+1589.742879734" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.306440 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:15:50 crc kubenswrapper[4957]: I1128 21:15:50.976244 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp"] Nov 28 21:15:50 crc kubenswrapper[4957]: W1128 21:15:50.980058 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1499e3ce_e9eb_4774_9f22_fbac5300742b.slice/crio-21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940 WatchSource:0}: Error finding container 21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940: Status 404 returned error can't find the container with id 21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940 Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.289760 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" event={"ID":"1499e3ce-e9eb-4774-9f22-fbac5300742b","Type":"ContainerStarted","Data":"21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940"} Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.292862 4957 generic.go:334] "Generic (PLEG): container finished" podID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerID="ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3" exitCode=0 Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.294031 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-565895bd86-z2gdh" event={"ID":"7973a31a-9f1b-4f08-a628-b739b15e2a6d","Type":"ContainerDied","Data":"ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3"} Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.294122 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-565895bd86-z2gdh" event={"ID":"7973a31a-9f1b-4f08-a628-b739b15e2a6d","Type":"ContainerDied","Data":"5917273e458286a36a4a0c2cbf7f3612c6df28ba192da02addb5ca2a23f12625"} Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.294147 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5917273e458286a36a4a0c2cbf7f3612c6df28ba192da02addb5ca2a23f12625" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.300742 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.400487 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle\") pod \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.400637 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data\") pod \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.400747 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom\") pod \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.400770 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm5b8\" (UniqueName: \"kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8\") pod \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\" (UID: \"7973a31a-9f1b-4f08-a628-b739b15e2a6d\") " Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.411964 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8" (OuterVolumeSpecName: "kube-api-access-dm5b8") pod "7973a31a-9f1b-4f08-a628-b739b15e2a6d" (UID: "7973a31a-9f1b-4f08-a628-b739b15e2a6d"). InnerVolumeSpecName "kube-api-access-dm5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.413335 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7973a31a-9f1b-4f08-a628-b739b15e2a6d" (UID: "7973a31a-9f1b-4f08-a628-b739b15e2a6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.444042 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7973a31a-9f1b-4f08-a628-b739b15e2a6d" (UID: "7973a31a-9f1b-4f08-a628-b739b15e2a6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.503955 4957 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.504531 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm5b8\" (UniqueName: \"kubernetes.io/projected/7973a31a-9f1b-4f08-a628-b739b15e2a6d-kube-api-access-dm5b8\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.504654 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.512322 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data" (OuterVolumeSpecName: "config-data") pod "7973a31a-9f1b-4f08-a628-b739b15e2a6d" (UID: "7973a31a-9f1b-4f08-a628-b739b15e2a6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:51 crc kubenswrapper[4957]: I1128 21:15:51.606334 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7973a31a-9f1b-4f08-a628-b739b15e2a6d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.304935 4957 generic.go:334] "Generic (PLEG): container finished" podID="8e5a6783-443c-4aa3-8985-2476a17d6f48" containerID="ea2ccdf1284a7aa7d8981ca6f9b75fd4358d74e94dfc58ce135d4c6d08232b24" exitCode=0 Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.305300 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-565895bd86-z2gdh" Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.305008 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jm9wx" event={"ID":"8e5a6783-443c-4aa3-8985-2476a17d6f48","Type":"ContainerDied","Data":"ea2ccdf1284a7aa7d8981ca6f9b75fd4358d74e94dfc58ce135d4c6d08232b24"} Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.362613 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.377380 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-565895bd86-z2gdh"] Nov 28 21:15:52 crc kubenswrapper[4957]: I1128 21:15:52.826742 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" path="/var/lib/kubelet/pods/7973a31a-9f1b-4f08-a628-b739b15e2a6d/volumes" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.726335 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.884940 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts\") pod \"8e5a6783-443c-4aa3-8985-2476a17d6f48\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.885084 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data\") pod \"8e5a6783-443c-4aa3-8985-2476a17d6f48\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.885198 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4z4\" (UniqueName: \"kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4\") pod \"8e5a6783-443c-4aa3-8985-2476a17d6f48\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.885316 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle\") pod \"8e5a6783-443c-4aa3-8985-2476a17d6f48\" (UID: \"8e5a6783-443c-4aa3-8985-2476a17d6f48\") " Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.892257 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts" (OuterVolumeSpecName: "scripts") pod "8e5a6783-443c-4aa3-8985-2476a17d6f48" (UID: "8e5a6783-443c-4aa3-8985-2476a17d6f48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.892599 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4" (OuterVolumeSpecName: "kube-api-access-vp4z4") pod "8e5a6783-443c-4aa3-8985-2476a17d6f48" (UID: "8e5a6783-443c-4aa3-8985-2476a17d6f48"). InnerVolumeSpecName "kube-api-access-vp4z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.931684 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e5a6783-443c-4aa3-8985-2476a17d6f48" (UID: "8e5a6783-443c-4aa3-8985-2476a17d6f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.935190 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data" (OuterVolumeSpecName: "config-data") pod "8e5a6783-443c-4aa3-8985-2476a17d6f48" (UID: "8e5a6783-443c-4aa3-8985-2476a17d6f48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.993037 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.993075 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.993087 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5a6783-443c-4aa3-8985-2476a17d6f48-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:53 crc kubenswrapper[4957]: I1128 21:15:53.993099 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4z4\" (UniqueName: \"kubernetes.io/projected/8e5a6783-443c-4aa3-8985-2476a17d6f48-kube-api-access-vp4z4\") on node \"crc\" DevicePath \"\"" Nov 28 21:15:54 crc kubenswrapper[4957]: I1128 21:15:54.338102 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jm9wx" event={"ID":"8e5a6783-443c-4aa3-8985-2476a17d6f48","Type":"ContainerDied","Data":"7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2"} Nov 28 21:15:54 crc kubenswrapper[4957]: I1128 21:15:54.338377 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8c6c378359cebdc75b1a15f3a38bc2c681b41d46ffe06c35326279d6fd6ad2" Nov 28 21:15:54 crc kubenswrapper[4957]: I1128 21:15:54.338443 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jm9wx" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.445374 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:15:57 crc kubenswrapper[4957]: E1128 21:15:57.446741 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerName="heat-engine" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.446758 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerName="heat-engine" Nov 28 21:15:57 crc kubenswrapper[4957]: E1128 21:15:57.446791 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5a6783-443c-4aa3-8985-2476a17d6f48" containerName="aodh-db-sync" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.446797 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5a6783-443c-4aa3-8985-2476a17d6f48" containerName="aodh-db-sync" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.467991 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5a6783-443c-4aa3-8985-2476a17d6f48" containerName="aodh-db-sync" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.468068 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="7973a31a-9f1b-4f08-a628-b739b15e2a6d" containerName="heat-engine" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.475726 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.512609 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.629058 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.629256 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpphw\" (UniqueName: \"kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.629286 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.731238 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpphw\" (UniqueName: \"kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.731298 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.731401 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.731946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.731973 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.759046 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hpphw\" (UniqueName: \"kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw\") pod \"redhat-marketplace-l7wvf\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:57 crc kubenswrapper[4957]: I1128 21:15:57.820825 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.026407 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.540719 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.540990 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-api" containerID="cri-o://0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe" gracePeriod=30 Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.541037 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-listener" containerID="cri-o://69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47" gracePeriod=30 Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.541062 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-notifier" containerID="cri-o://8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471" gracePeriod=30 Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.541105 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-evaluator" containerID="cri-o://03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744" gracePeriod=30 Nov 28 21:15:58 crc kubenswrapper[4957]: I1128 21:15:58.984354 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 21:15:59 crc kubenswrapper[4957]: I1128 21:15:59.478576 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerID="03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744" exitCode=0 Nov 28 21:15:59 crc kubenswrapper[4957]: I1128 21:15:59.478970 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerID="0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe" exitCode=0 Nov 28 21:15:59 crc kubenswrapper[4957]: I1128 21:15:59.478667 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerDied","Data":"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"} Nov 28 21:15:59 crc kubenswrapper[4957]: I1128 21:15:59.479045 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerDied","Data":"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"} Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.141645 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.501092 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" event={"ID":"1499e3ce-e9eb-4774-9f22-fbac5300742b","Type":"ContainerStarted","Data":"e85108dd8cf87e8a99744bc68b7ed03ad33461b1a0ac793bd56c6258dcd144f1"} Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.503720 4957 generic.go:334] "Generic (PLEG): container finished" podID="b239e305-9ed3-4422-863b-e6642f144d23" containerID="ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847" exitCode=0 Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.503765 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerDied","Data":"ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847"} Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.503790 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerStarted","Data":"4f248d397244e6b1d9846363f771bafc9b929e45525fab1069a114ddbacf4322"} Nov 28 21:16:01 crc kubenswrapper[4957]: I1128 21:16:01.524317 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" podStartSLOduration=2.7997727489999997 podStartE2EDuration="12.524300227s" podCreationTimestamp="2025-11-28 21:15:49 +0000 UTC" firstStartedPulling="2025-11-28 21:15:50.987496604 +0000 UTC m=+1590.456144513" lastFinishedPulling="2025-11-28 21:16:00.712024072 +0000 UTC m=+1600.180671991" observedRunningTime="2025-11-28 21:16:01.520490943 +0000 UTC m=+1600.989138852" watchObservedRunningTime="2025-11-28 21:16:01.524300227 +0000 UTC m=+1600.992948136" Nov 28 21:16:02 crc kubenswrapper[4957]: I1128 21:16:02.518796 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerID="69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47" exitCode=0 Nov 28 21:16:02 crc kubenswrapper[4957]: I1128 21:16:02.518868 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerDied","Data":"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47"} Nov 28 21:16:02 crc kubenswrapper[4957]: I1128 21:16:02.522583 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerStarted","Data":"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26"} Nov 28 21:16:03 crc kubenswrapper[4957]: I1128 21:16:03.535356 4957 generic.go:334] "Generic (PLEG): container finished" podID="b239e305-9ed3-4422-863b-e6642f144d23" containerID="f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26" exitCode=0 Nov 28 21:16:03 crc kubenswrapper[4957]: I1128 21:16:03.535519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerDied","Data":"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26"} Nov 28 21:16:05 crc kubenswrapper[4957]: I1128 21:16:05.558592 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerStarted","Data":"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05"} Nov 28 21:16:05 crc kubenswrapper[4957]: I1128 21:16:05.584491 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7wvf" podStartSLOduration=5.183974448 podStartE2EDuration="8.584471057s" podCreationTimestamp="2025-11-28 21:15:57 +0000 UTC" firstStartedPulling="2025-11-28 21:16:01.505532345 +0000 UTC m=+1600.974180254" lastFinishedPulling="2025-11-28 21:16:04.906028954 +0000 UTC m=+1604.374676863" observedRunningTime="2025-11-28 21:16:05.579507895 +0000 UTC m=+1605.048155804" watchObservedRunningTime="2025-11-28 21:16:05.584471057 +0000 UTC m=+1605.053118966" Nov 28 21:16:07 crc kubenswrapper[4957]: I1128 21:16:07.821899 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:07 crc kubenswrapper[4957]: I1128 21:16:07.822313 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:07 crc kubenswrapper[4957]: I1128 21:16:07.882969 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:12 crc kubenswrapper[4957]: I1128 21:16:12.643256 4957 generic.go:334] "Generic (PLEG): container finished" podID="1499e3ce-e9eb-4774-9f22-fbac5300742b" containerID="e85108dd8cf87e8a99744bc68b7ed03ad33461b1a0ac793bd56c6258dcd144f1" exitCode=0 Nov 28 21:16:12 crc kubenswrapper[4957]: I1128 21:16:12.643359 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" event={"ID":"1499e3ce-e9eb-4774-9f22-fbac5300742b","Type":"ContainerDied","Data":"e85108dd8cf87e8a99744bc68b7ed03ad33461b1a0ac793bd56c6258dcd144f1"} Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.135733 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.218835 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.219072 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59mkf\" (UniqueName: \"kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.219138 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.219196 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.219255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.219314 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle\") pod \"1bbe71a6-9b0b-4d76-a004-6facaa044521\" (UID: \"1bbe71a6-9b0b-4d76-a004-6facaa044521\") " Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.229283 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts" (OuterVolumeSpecName: "scripts") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.231915 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf" (OuterVolumeSpecName: "kube-api-access-59mkf") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "kube-api-access-59mkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.293634 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.296774 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.321694 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59mkf\" (UniqueName: \"kubernetes.io/projected/1bbe71a6-9b0b-4d76-a004-6facaa044521-kube-api-access-59mkf\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.321758 4957 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.321768 4957 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.321776 4957 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.354305 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.361849 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data" (OuterVolumeSpecName: "config-data") pod "1bbe71a6-9b0b-4d76-a004-6facaa044521" (UID: "1bbe71a6-9b0b-4d76-a004-6facaa044521"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.424804 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.424857 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbe71a6-9b0b-4d76-a004-6facaa044521-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.659035 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerID="8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471" exitCode=0 Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.659092 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerDied","Data":"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471"} Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.659141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bbe71a6-9b0b-4d76-a004-6facaa044521","Type":"ContainerDied","Data":"09b7d3d21bf5d30c7598e90bcce18b52c53480ee6726e127b037eca88770257b"} Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.659150 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.659162 4957 scope.go:117] "RemoveContainer" containerID="69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.744441 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.757678 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.781853 4957 scope.go:117] "RemoveContainer" containerID="8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.801791 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.802280 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-listener" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802296 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-listener" Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.802305 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-notifier" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802311 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-notifier" Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.802332 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-evaluator" Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802338 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-evaluator" Nov 
Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.802371 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-api"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802376 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-api"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802577 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-evaluator"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802591 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-notifier"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802615 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-api"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.802632 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" containerName="aodh-listener"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.804805 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.808545 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.808599 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-72swb"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.808862 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.808870 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.818313 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.826138 4957 scope.go:117] "RemoveContainer" containerID="03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.828831 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.868383 4957 scope.go:117] "RemoveContainer" containerID="0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.893224 4957 scope.go:117] "RemoveContainer" containerID="69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47"
Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.893700 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47\": container with ID starting with 69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47 not found: ID does not exist" containerID="69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.893742 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47"} err="failed to get container status \"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47\": rpc error: code = NotFound desc = could not find container \"69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47\": container with ID starting with 69c105e37f5ba91bdaf83df353fec317e9f8c248dba2b653214c5a19e47a2f47 not found: ID does not exist"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.893769 4957 scope.go:117] "RemoveContainer" containerID="8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471"
Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.894280 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471\": container with ID starting with 8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471 not found: ID does not exist" containerID="8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.894301 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471"} err="failed to get container status \"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471\": rpc error: code = NotFound desc = could not find container \"8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471\": container with ID starting with 8e3343066c6e27df38aa84bd03d5d14238cd41559aa970a6d3fde9490a34c471 not found: ID does not exist"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.894319 4957 scope.go:117] "RemoveContainer" containerID="03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"
Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.894579 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744\": container with ID starting with 03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744 not found: ID does not exist" containerID="03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.894595 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744"} err="failed to get container status \"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744\": rpc error: code = NotFound desc = could not find container \"03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744\": container with ID starting with 03ef6a95fa48382489451b3bb1ce18ce6eb4a488ca7d01a5101de064582f7744 not found: ID does not exist"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.894607 4957 scope.go:117] "RemoveContainer" containerID="0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"
Nov 28 21:16:13 crc kubenswrapper[4957]: E1128 21:16:13.895221 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe\": container with ID starting with 0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe not found: ID does not exist" containerID="0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.895248 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe"} err="failed to get container status \"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe\": rpc error: code = NotFound desc = could not find container \"0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe\": container with ID starting with 0688b53b33298e6096b9fee42a68f1ba52cbd19afdfbfb083e035108194d6ebe not found: ID does not exist"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.934753 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98b96\" (UniqueName: \"kubernetes.io/projected/b4c649b0-9467-4e39-98b6-54e217030877-kube-api-access-98b96\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.934835 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-scripts\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.935056 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-config-data\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.936204 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-public-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.938863 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-internal-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:13 crc kubenswrapper[4957]: I1128 21:16:13.938899 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.041136 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-scripts\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.041246 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-config-data\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0"
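The four E-level bursts above are cleanup racing its own result: by the time scope.go retries RemoveContainer, CRI-O has already deleted the container, so ContainerStatus comes back NotFound and pod_container_deletor logs an error even though the desired end state (container gone) already holds. When counting errors in a journal like this, it helps to filter that benign pattern out first; a sketch, where the "benign" classification is my judgment rather than any kubelet flag:

```python
import re

# Keep E-level lines except the NotFound-on-already-deleted pattern,
# which this journal shows is harmless retry noise.
ERR_HDR = re.compile(r'kubenswrapper\[\d+\]: E\d{4} ')
BENIGN = 'code = NotFound desc = could not find container'

def interesting_errors(journal_text: str):
    for line in journal_text.splitlines():
        if ERR_HDR.search(line) and BENIGN not in line:
            yield line

# sum(1 for _ in interesting_errors(open('kubelet.log').read()))
```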
\"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-public-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.041403 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-internal-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.041419 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.041491 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98b96\" (UniqueName: \"kubernetes.io/projected/b4c649b0-9467-4e39-98b6-54e217030877-kube-api-access-98b96\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.045662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-scripts\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.045828 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-public-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.045871 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.046413 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-internal-tls-certs\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.047343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c649b0-9467-4e39-98b6-54e217030877-config-data\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.061267 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98b96\" (UniqueName: \"kubernetes.io/projected/b4c649b0-9467-4e39-98b6-54e217030877-kube-api-access-98b96\") pod \"aodh-0\" (UID: \"b4c649b0-9467-4e39-98b6-54e217030877\") " pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.135907 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.249828 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.356886 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgb7c\" (UniqueName: \"kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c\") pod \"1499e3ce-e9eb-4774-9f22-fbac5300742b\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.356965 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle\") pod \"1499e3ce-e9eb-4774-9f22-fbac5300742b\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.356995 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory\") pod \"1499e3ce-e9eb-4774-9f22-fbac5300742b\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.357098 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key\") pod \"1499e3ce-e9eb-4774-9f22-fbac5300742b\" (UID: \"1499e3ce-e9eb-4774-9f22-fbac5300742b\") " Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.364579 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1499e3ce-e9eb-4774-9f22-fbac5300742b" (UID: "1499e3ce-e9eb-4774-9f22-fbac5300742b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.365449 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c" (OuterVolumeSpecName: "kube-api-access-kgb7c") pod "1499e3ce-e9eb-4774-9f22-fbac5300742b" (UID: "1499e3ce-e9eb-4774-9f22-fbac5300742b"). InnerVolumeSpecName "kube-api-access-kgb7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.395843 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory" (OuterVolumeSpecName: "inventory") pod "1499e3ce-e9eb-4774-9f22-fbac5300742b" (UID: "1499e3ce-e9eb-4774-9f22-fbac5300742b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.409380 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1499e3ce-e9eb-4774-9f22-fbac5300742b" (UID: "1499e3ce-e9eb-4774-9f22-fbac5300742b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.460159 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgb7c\" (UniqueName: \"kubernetes.io/projected/1499e3ce-e9eb-4774-9f22-fbac5300742b-kube-api-access-kgb7c\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.460195 4957 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.460219 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.460229 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1499e3ce-e9eb-4774-9f22-fbac5300742b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.632636 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 21:16:14 crc kubenswrapper[4957]: W1128 21:16:14.632731 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c649b0_9467_4e39_98b6_54e217030877.slice/crio-c1f482f220c76b71e9c8572d5f71cbad5e17525a31355591583248f15dfe10e3 WatchSource:0}: Error finding container c1f482f220c76b71e9c8572d5f71cbad5e17525a31355591583248f15dfe10e3: Status 404 returned error can't find the container with id c1f482f220c76b71e9c8572d5f71cbad5e17525a31355591583248f15dfe10e3 Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.672052 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" event={"ID":"1499e3ce-e9eb-4774-9f22-fbac5300742b","Type":"ContainerDied","Data":"21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940"} Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.672113 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f2e5712b1028708b96e3b7912c23f2d629adea3b4edd45dda90d225b23a940" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.673294 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.674971 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b4c649b0-9467-4e39-98b6-54e217030877","Type":"ContainerStarted","Data":"c1f482f220c76b71e9c8572d5f71cbad5e17525a31355591583248f15dfe10e3"} Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.748130 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s"] Nov 28 21:16:14 crc kubenswrapper[4957]: E1128 21:16:14.748611 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1499e3ce-e9eb-4774-9f22-fbac5300742b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.748628 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1499e3ce-e9eb-4774-9f22-fbac5300742b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.748918 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1499e3ce-e9eb-4774-9f22-fbac5300742b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.749778 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.751539 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.753509 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.753549 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.753740 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.763791 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s"] Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.831589 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbe71a6-9b0b-4d76-a004-6facaa044521" path="/var/lib/kubelet/pods/1bbe71a6-9b0b-4d76-a004-6facaa044521/volumes" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.868525 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.868974 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.869245 4957 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrzm\" (UniqueName: \"kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.971785 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.972038 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.972310 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mrzm\" (UniqueName: \"kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.976813 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.977443 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:14 crc kubenswrapper[4957]: I1128 21:16:14.996031 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mrzm\" (UniqueName: \"kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d6v7s\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:15 crc kubenswrapper[4957]: I1128 21:16:15.094771 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:15 crc kubenswrapper[4957]: I1128 21:16:15.690905 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b4c649b0-9467-4e39-98b6-54e217030877","Type":"ContainerStarted","Data":"e349bd6d92c39a1942247bd1c291c167c8ad7da4136d183c2b1a940f07f13413"} Nov 28 21:16:15 crc kubenswrapper[4957]: I1128 21:16:15.718224 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s"] Nov 28 21:16:16 crc kubenswrapper[4957]: I1128 21:16:16.704600 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" event={"ID":"38c81f98-baf0-45aa-a33e-566697f7673c","Type":"ContainerStarted","Data":"051a0563c23c3c19006a1373220ff4de9cc08036ddab2c76ebec534b4e1c3f96"} Nov 28 21:16:16 crc kubenswrapper[4957]: I1128 21:16:16.705124 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" event={"ID":"38c81f98-baf0-45aa-a33e-566697f7673c","Type":"ContainerStarted","Data":"25f778ed5daf9cb56b0a84e7a7758a7f22f436964a9b9d8599df7bfe6cd2098c"} Nov 28 21:16:16 crc kubenswrapper[4957]: I1128 21:16:16.710177 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b4c649b0-9467-4e39-98b6-54e217030877","Type":"ContainerStarted","Data":"baf21f458ffeb6463d9f79bcde57cb34718478570aa64abac189496f904f34a8"} Nov 28 21:16:16 crc kubenswrapper[4957]: I1128 21:16:16.731680 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" podStartSLOduration=2.057441328 podStartE2EDuration="2.731659627s" podCreationTimestamp="2025-11-28 21:16:14 +0000 UTC" firstStartedPulling="2025-11-28 21:16:15.724542272 +0000 UTC m=+1615.193190171" lastFinishedPulling="2025-11-28 21:16:16.398760561 +0000 UTC m=+1615.867408470" observedRunningTime="2025-11-28 21:16:16.724045459 +0000 UTC m=+1616.192693368" watchObservedRunningTime="2025-11-28 21:16:16.731659627 +0000 UTC m=+1616.200307536" Nov 28 21:16:17 crc kubenswrapper[4957]: I1128 21:16:17.887311 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:17 crc kubenswrapper[4957]: I1128 21:16:17.951041 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:16:18 crc kubenswrapper[4957]: I1128 21:16:18.762157 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l7wvf" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="registry-server" containerID="cri-o://658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05" gracePeriod=2 Nov 28 21:16:18 crc kubenswrapper[4957]: I1128 21:16:18.762432 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b4c649b0-9467-4e39-98b6-54e217030877","Type":"ContainerStarted","Data":"5db44fbc818bbd15cede67006ec44eef186790d63a0000eb9a0641a7e6200e5b"} Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.459554 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.602296 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities\") pod \"b239e305-9ed3-4422-863b-e6642f144d23\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.602689 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpphw\" (UniqueName: \"kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw\") pod \"b239e305-9ed3-4422-863b-e6642f144d23\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.602764 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content\") pod \"b239e305-9ed3-4422-863b-e6642f144d23\" (UID: \"b239e305-9ed3-4422-863b-e6642f144d23\") " Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.603029 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities" (OuterVolumeSpecName: "utilities") pod "b239e305-9ed3-4422-863b-e6642f144d23" (UID: "b239e305-9ed3-4422-863b-e6642f144d23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.605565 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.607059 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw" (OuterVolumeSpecName: "kube-api-access-hpphw") pod "b239e305-9ed3-4422-863b-e6642f144d23" (UID: "b239e305-9ed3-4422-863b-e6642f144d23"). InnerVolumeSpecName "kube-api-access-hpphw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.618738 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b239e305-9ed3-4422-863b-e6642f144d23" (UID: "b239e305-9ed3-4422-863b-e6642f144d23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.707229 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpphw\" (UniqueName: \"kubernetes.io/projected/b239e305-9ed3-4422-863b-e6642f144d23-kube-api-access-hpphw\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.707268 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b239e305-9ed3-4422-863b-e6642f144d23-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.786999 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b4c649b0-9467-4e39-98b6-54e217030877","Type":"ContainerStarted","Data":"63ec69d260cfaec9cdc96fc1ac061c4a468065dad13f806925e3cc54526e5df5"} Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.791877 4957 generic.go:334] "Generic (PLEG): container finished" podID="b239e305-9ed3-4422-863b-e6642f144d23" containerID="658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05" exitCode=0 Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.791967 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerDied","Data":"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05"} Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.791992 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7wvf" event={"ID":"b239e305-9ed3-4422-863b-e6642f144d23","Type":"ContainerDied","Data":"4f248d397244e6b1d9846363f771bafc9b929e45525fab1069a114ddbacf4322"} Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.792010 4957 scope.go:117] "RemoveContainer" containerID="658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.792402 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7wvf" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.795642 4957 generic.go:334] "Generic (PLEG): container finished" podID="38c81f98-baf0-45aa-a33e-566697f7673c" containerID="051a0563c23c3c19006a1373220ff4de9cc08036ddab2c76ebec534b4e1c3f96" exitCode=0 Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.795691 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" event={"ID":"38c81f98-baf0-45aa-a33e-566697f7673c","Type":"ContainerDied","Data":"051a0563c23c3c19006a1373220ff4de9cc08036ddab2c76ebec534b4e1c3f96"} Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.814562 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.349661524 podStartE2EDuration="6.814544946s" podCreationTimestamp="2025-11-28 21:16:13 +0000 UTC" firstStartedPulling="2025-11-28 21:16:14.636418375 +0000 UTC m=+1614.105066284" lastFinishedPulling="2025-11-28 21:16:19.101301797 +0000 UTC m=+1618.569949706" observedRunningTime="2025-11-28 21:16:19.804084519 +0000 UTC m=+1619.272732428" watchObservedRunningTime="2025-11-28 21:16:19.814544946 +0000 UTC m=+1619.283192855" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.857257 4957 scope.go:117] "RemoveContainer" containerID="f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.869987 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.880165 4957 scope.go:117] "RemoveContainer" containerID="ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.881435 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7wvf"] Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.904541 4957 scope.go:117] "RemoveContainer" containerID="658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05" Nov 28 21:16:19 crc kubenswrapper[4957]: E1128 21:16:19.905160 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05\": container with ID starting with 658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05 not found: ID does not exist" containerID="658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.905217 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05"} err="failed to get container status \"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05\": rpc error: code = NotFound desc = could not find container \"658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05\": container with ID starting with 658074c56b98a32a7092a04b1a6987b80a8b427862bfb39e21328d3dd77e3b05 not found: ID does not exist" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.905245 4957 scope.go:117] "RemoveContainer" containerID="f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26" Nov 28 21:16:19 crc kubenswrapper[4957]: E1128 21:16:19.905562 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26\": container with ID starting with f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26 not found: ID does not exist" containerID="f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.905607 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26"} err="failed to get container status \"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26\": rpc error: code = NotFound desc = could not find container \"f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26\": container with ID starting with f4038c3f16ade97e0e0d7f802edf80d6b5605fdfb77c44ab0f8ddca8f886da26 not found: ID does not exist" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.905637 4957 scope.go:117] "RemoveContainer" containerID="ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847" Nov 28 21:16:19 crc kubenswrapper[4957]: E1128 21:16:19.905954 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847\": container with ID starting with ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847 not found: ID does not exist" containerID="ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847" Nov 28 21:16:19 crc kubenswrapper[4957]: I1128 21:16:19.905981 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847"} err="failed to get container status \"ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847\": rpc error: code = NotFound desc = could not find container \"ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847\": container with ID starting with ffa7f1e7704af8a0a2df2a64d206847c09a40eefea938b39d53b09e741a31847 not found: ID does not exist" Nov 28 21:16:20 crc kubenswrapper[4957]: I1128 21:16:20.834237 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b239e305-9ed3-4422-863b-e6642f144d23" path="/var/lib/kubelet/pods/b239e305-9ed3-4422-863b-e6642f144d23/volumes" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.337376 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.459308 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mrzm\" (UniqueName: \"kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm\") pod \"38c81f98-baf0-45aa-a33e-566697f7673c\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.459496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory\") pod \"38c81f98-baf0-45aa-a33e-566697f7673c\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.459626 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key\") pod \"38c81f98-baf0-45aa-a33e-566697f7673c\" (UID: \"38c81f98-baf0-45aa-a33e-566697f7673c\") " Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.465110 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm" (OuterVolumeSpecName: "kube-api-access-5mrzm") pod "38c81f98-baf0-45aa-a33e-566697f7673c" (UID: "38c81f98-baf0-45aa-a33e-566697f7673c"). InnerVolumeSpecName "kube-api-access-5mrzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.494263 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory" (OuterVolumeSpecName: "inventory") pod "38c81f98-baf0-45aa-a33e-566697f7673c" (UID: "38c81f98-baf0-45aa-a33e-566697f7673c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.495380 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38c81f98-baf0-45aa-a33e-566697f7673c" (UID: "38c81f98-baf0-45aa-a33e-566697f7673c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.562685 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mrzm\" (UniqueName: \"kubernetes.io/projected/38c81f98-baf0-45aa-a33e-566697f7673c-kube-api-access-5mrzm\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.562720 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.562731 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38c81f98-baf0-45aa-a33e-566697f7673c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.822295 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.823666 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d6v7s" event={"ID":"38c81f98-baf0-45aa-a33e-566697f7673c","Type":"ContainerDied","Data":"25f778ed5daf9cb56b0a84e7a7758a7f22f436964a9b9d8599df7bfe6cd2098c"} Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.823777 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f778ed5daf9cb56b0a84e7a7758a7f22f436964a9b9d8599df7bfe6cd2098c" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.902386 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l"] Nov 28 21:16:21 crc kubenswrapper[4957]: E1128 21:16:21.902938 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="extract-content" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.902952 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="extract-content" Nov 28 21:16:21 crc kubenswrapper[4957]: E1128 21:16:21.902980 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c81f98-baf0-45aa-a33e-566697f7673c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.902987 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c81f98-baf0-45aa-a33e-566697f7673c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:21 crc kubenswrapper[4957]: E1128 21:16:21.903011 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="extract-utilities" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.903019 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="extract-utilities" Nov 28 21:16:21 crc kubenswrapper[4957]: E1128 21:16:21.903037 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="registry-server" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.903043 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="registry-server" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.903273 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c81f98-baf0-45aa-a33e-566697f7673c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.903291 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="b239e305-9ed3-4422-863b-e6642f144d23" containerName="registry-server" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.904067 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.906732 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.906758 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.906858 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.907916 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.920301 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l"] Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.970613 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg26h\" (UniqueName: \"kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.970742 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.970786 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:21 crc kubenswrapper[4957]: I1128 21:16:21.970839 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.072657 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.072732 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: 
\"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.072792 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.072855 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg26h\" (UniqueName: \"kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.077379 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.077721 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.089231 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.089995 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg26h\" (UniqueName: \"kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.225365 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:16:22 crc kubenswrapper[4957]: W1128 21:16:22.768437 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d722b72_77bc_4500_89f5_a13bfa49eba1.slice/crio-0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6 WatchSource:0}: Error finding container 0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6: Status 404 returned error can't find the container with id 0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6 Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.770626 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l"] Nov 28 21:16:22 crc kubenswrapper[4957]: I1128 21:16:22.846666 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" event={"ID":"9d722b72-77bc-4500-89f5-a13bfa49eba1","Type":"ContainerStarted","Data":"0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6"} Nov 28 21:16:23 crc kubenswrapper[4957]: I1128 21:16:23.860467 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" event={"ID":"9d722b72-77bc-4500-89f5-a13bfa49eba1","Type":"ContainerStarted","Data":"778db3242487620ee7b2b93fe691abc202a159b6ec8252e4b5a6531a5ae249f5"} Nov 28 21:16:23 crc kubenswrapper[4957]: I1128 21:16:23.883904 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" podStartSLOduration=2.446008763 podStartE2EDuration="2.883885861s" podCreationTimestamp="2025-11-28 21:16:21 +0000 UTC" firstStartedPulling="2025-11-28 21:16:22.771656771 +0000 UTC m=+1622.240304680" lastFinishedPulling="2025-11-28 21:16:23.209533869 +0000 UTC m=+1622.678181778" observedRunningTime="2025-11-28 21:16:23.8826287 +0000 UTC m=+1623.351276649" watchObservedRunningTime="2025-11-28 21:16:23.883885861 +0000 UTC m=+1623.352533770" Nov 28 21:16:25 crc kubenswrapper[4957]: I1128 21:16:25.027768 4957 scope.go:117] "RemoveContainer" containerID="9e2588fac43e45deff7a6705c72ef3d1d36222c2bd9883a02b68befb0275c10e" Nov 28 21:16:38 crc kubenswrapper[4957]: I1128 21:16:38.992373 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:16:38 crc kubenswrapper[4957]: I1128 21:16:38.992940 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:17:08 crc kubenswrapper[4957]: I1128 21:17:08.992407 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:17:08 crc kubenswrapper[4957]: I1128 21:17:08.993946 4957 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:17:25 crc kubenswrapper[4957]: I1128 21:17:25.167108 4957 scope.go:117] "RemoveContainer" containerID="85d175b18493bc929f2c95709a48eb69f7434d802d92fb1ac4035467f5cee6a4" Nov 28 21:17:25 crc kubenswrapper[4957]: I1128 21:17:25.193957 4957 scope.go:117] "RemoveContainer" containerID="19dce4167d490c07875c84ad0d8387aa943adef0205cd020960713fd80a0aff6" Nov 28 21:17:25 crc kubenswrapper[4957]: I1128 21:17:25.218605 4957 scope.go:117] "RemoveContainer" containerID="650e485e9fa708f9860ca253b6582dfa15af26b04c31300a18c975992a5823fa" Nov 28 21:17:25 crc kubenswrapper[4957]: I1128 21:17:25.248902 4957 scope.go:117] "RemoveContainer" containerID="53e8138cc5582a73d87a78ce233f19b8b87c69596b872fffe143e3c864a3cb2e" Nov 28 21:17:25 crc kubenswrapper[4957]: I1128 21:17:25.298230 4957 scope.go:117] "RemoveContainer" containerID="2e392aec32cd413913190e3be82201f9703c1b06c183b83d37e6c0c757bb158c" Nov 28 21:17:38 crc kubenswrapper[4957]: I1128 21:17:38.992049 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:17:38 crc kubenswrapper[4957]: I1128 21:17:38.992899 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:17:38 crc kubenswrapper[4957]: I1128 21:17:38.993029 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:17:38 crc kubenswrapper[4957]: I1128 21:17:38.993971 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:17:38 crc kubenswrapper[4957]: I1128 21:17:38.994035 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" gracePeriod=600 Nov 28 21:17:39 crc kubenswrapper[4957]: E1128 21:17:39.133275 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:17:39 crc kubenswrapper[4957]: I1128 21:17:39.792779 4957 generic.go:334] "Generic (PLEG): container 
finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" exitCode=0 Nov 28 21:17:39 crc kubenswrapper[4957]: I1128 21:17:39.792884 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"} Nov 28 21:17:39 crc kubenswrapper[4957]: I1128 21:17:39.793123 4957 scope.go:117] "RemoveContainer" containerID="f101a7233fc82a0da07c8fa09d39544890b7480c6753772c083a17bd3f35908d" Nov 28 21:17:39 crc kubenswrapper[4957]: I1128 21:17:39.793932 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:17:39 crc kubenswrapper[4957]: E1128 21:17:39.794308 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:17:53 crc kubenswrapper[4957]: I1128 21:17:53.814701 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:17:53 crc kubenswrapper[4957]: E1128 21:17:53.815566 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:18:07 crc kubenswrapper[4957]: I1128 21:18:07.813527 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:18:07 crc kubenswrapper[4957]: E1128 21:18:07.814305 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:18:19 crc kubenswrapper[4957]: I1128 21:18:19.813645 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:18:19 crc kubenswrapper[4957]: E1128 21:18:19.814384 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:18:25 crc kubenswrapper[4957]: I1128 21:18:25.407526 4957 scope.go:117] "RemoveContainer" containerID="eaf0d1da6584bd28d57138949347bc0ecce364690fda79c596acb838bf9a031c" Nov 28 21:18:31 crc 
Nov 28 21:18:31 crc kubenswrapper[4957]: I1128 21:18:31.821069 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:18:31 crc kubenswrapper[4957]: E1128 21:18:31.823281 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:18:46 crc kubenswrapper[4957]: I1128 21:18:46.813463 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:18:46 crc kubenswrapper[4957]: E1128 21:18:46.814267 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:18:57 crc kubenswrapper[4957]: I1128 21:18:57.812728 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:18:57 crc kubenswrapper[4957]: E1128 21:18:57.813494 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:19:08 crc kubenswrapper[4957]: I1128 21:19:08.813608 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:19:08 crc kubenswrapper[4957]: E1128 21:19:08.814418 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:19:23 crc kubenswrapper[4957]: I1128 21:19:23.813533 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:19:23 crc kubenswrapper[4957]: E1128 21:19:23.814339 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:19:25 crc kubenswrapper[4957]: I1128 21:19:25.468278 4957 scope.go:117] "RemoveContainer" containerID="ecd59e85ecbb7561c8f1e798e77079dcabe75b8b9d74e9e1d8e11884bedaf6d3"
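From here the pattern is pure CrashLoopBackOff: every sync-loop pass logs a RemoveContainer for the dead container's ID followed by the back-off error, roughly every 11-15 s, while the actual restart stays blocked for the full 5m0s back-off. Extracting the gaps makes the cadence obvious:

    from datetime import datetime

    # Times of the repeated "Error syncing pod ... CrashLoopBackOff" records above.
    retries = ["21:17:39.133", "21:17:53.815", "21:18:07.814", "21:18:19.814",
               "21:18:31.823", "21:18:46.814", "21:18:57.813", "21:19:08.814",
               "21:19:23.814"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in retries]
    print([f"{(b - a).total_seconds():.0f}s" for a, b in zip(ts, ts[1:])])
    # ['15s', '14s', '12s', '12s', '15s', '11s', '11s', '15s'] -- sync-loop
    # retries; the container itself stays down for the whole back-off window.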
Nov 28 21:19:34 crc kubenswrapper[4957]: I1128 21:19:34.086391 4957 generic.go:334] "Generic (PLEG): container finished" podID="9d722b72-77bc-4500-89f5-a13bfa49eba1" containerID="778db3242487620ee7b2b93fe691abc202a159b6ec8252e4b5a6531a5ae249f5" exitCode=0
Nov 28 21:19:34 crc kubenswrapper[4957]: I1128 21:19:34.086479 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" event={"ID":"9d722b72-77bc-4500-89f5-a13bfa49eba1","Type":"ContainerDied","Data":"778db3242487620ee7b2b93fe691abc202a159b6ec8252e4b5a6531a5ae249f5"}
Nov 28 21:19:34 crc kubenswrapper[4957]: I1128 21:19:34.813512 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:19:34 crc kubenswrapper[4957]: E1128 21:19:34.813918 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.605972 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l"
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.619908 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory\") pod \"9d722b72-77bc-4500-89f5-a13bfa49eba1\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") "
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.619979 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle\") pod \"9d722b72-77bc-4500-89f5-a13bfa49eba1\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") "
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.620143 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg26h\" (UniqueName: \"kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h\") pod \"9d722b72-77bc-4500-89f5-a13bfa49eba1\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") "
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.620238 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key\") pod \"9d722b72-77bc-4500-89f5-a13bfa49eba1\" (UID: \"9d722b72-77bc-4500-89f5-a13bfa49eba1\") "
Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.644554 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9d722b72-77bc-4500-89f5-a13bfa49eba1" (UID: "9d722b72-77bc-4500-89f5-a13bfa49eba1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.644760 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h" (OuterVolumeSpecName: "kube-api-access-dg26h") pod "9d722b72-77bc-4500-89f5-a13bfa49eba1" (UID: "9d722b72-77bc-4500-89f5-a13bfa49eba1"). InnerVolumeSpecName "kube-api-access-dg26h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.663342 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9d722b72-77bc-4500-89f5-a13bfa49eba1" (UID: "9d722b72-77bc-4500-89f5-a13bfa49eba1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.664606 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory" (OuterVolumeSpecName: "inventory") pod "9d722b72-77bc-4500-89f5-a13bfa49eba1" (UID: "9d722b72-77bc-4500-89f5-a13bfa49eba1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.722421 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.722455 4957 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.722468 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg26h\" (UniqueName: \"kubernetes.io/projected/9d722b72-77bc-4500-89f5-a13bfa49eba1-kube-api-access-dg26h\") on node \"crc\" DevicePath \"\"" Nov 28 21:19:35 crc kubenswrapper[4957]: I1128 21:19:35.722477 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d722b72-77bc-4500-89f5-a13bfa49eba1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.116671 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" event={"ID":"9d722b72-77bc-4500-89f5-a13bfa49eba1","Type":"ContainerDied","Data":"0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6"} Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.116742 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc7a9bb9d6a531d6bdaf2aeaae4a2e2d7e02bdb53ad9b283798da6f0e481fd6" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.116819 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.201754 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"] Nov 28 21:19:36 crc kubenswrapper[4957]: E1128 21:19:36.202310 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d722b72-77bc-4500-89f5-a13bfa49eba1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.202331 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d722b72-77bc-4500-89f5-a13bfa49eba1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.202574 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d722b72-77bc-4500-89f5-a13bfa49eba1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.203438 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.210466 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.210940 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.211029 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.211132 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.241287 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"] Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.337621 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gw7q\" (UniqueName: \"kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.338636 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.338716 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.442273 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.442835 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.443372 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gw7q\" (UniqueName: \"kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.458581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.459326 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.467254 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gw7q\" (UniqueName: \"kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"
Nov 28 21:19:36 crc kubenswrapper[4957]: I1128 21:19:36.533224 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:19:37 crc kubenswrapper[4957]: I1128 21:19:37.101480 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6"] Nov 28 21:19:37 crc kubenswrapper[4957]: I1128 21:19:37.141853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" event={"ID":"e706336e-29eb-4b07-b29c-bb080c8026be","Type":"ContainerStarted","Data":"29883f96aa99c250695758172313ac05600c4157b47feda1b33df6de87e3bf70"} Nov 28 21:19:38 crc kubenswrapper[4957]: I1128 21:19:38.155582 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" event={"ID":"e706336e-29eb-4b07-b29c-bb080c8026be","Type":"ContainerStarted","Data":"3ef05872cb4f0ce960673f8dd6016f11a89159ac4def37a6416ac5e80526f198"} Nov 28 21:19:38 crc kubenswrapper[4957]: I1128 21:19:38.180863 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" podStartSLOduration=1.6065907130000001 podStartE2EDuration="2.180845897s" podCreationTimestamp="2025-11-28 21:19:36 +0000 UTC" firstStartedPulling="2025-11-28 21:19:37.106820168 +0000 UTC m=+1816.575468077" lastFinishedPulling="2025-11-28 21:19:37.681075352 +0000 UTC m=+1817.149723261" observedRunningTime="2025-11-28 21:19:38.177140866 +0000 UTC m=+1817.645788775" watchObservedRunningTime="2025-11-28 21:19:38.180845897 +0000 UTC m=+1817.649493806" Nov 28 21:19:47 crc kubenswrapper[4957]: I1128 21:19:47.813789 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:19:47 crc kubenswrapper[4957]: E1128 21:19:47.814551 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:19:48 crc kubenswrapper[4957]: I1128 21:19:48.052732 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fp7zh"] Nov 28 21:19:48 crc kubenswrapper[4957]: I1128 21:19:48.064420 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fp7zh"] Nov 28 21:19:48 crc kubenswrapper[4957]: I1128 21:19:48.846703 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90bdf335-aae2-44c6-927e-f6d9225d5970" path="/var/lib/kubelet/pods/90bdf335-aae2-44c6-927e-f6d9225d5970/volumes" Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.043138 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a86c-account-create-update-q856p"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.059290 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fgcv6"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.080178 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a86c-account-create-update-q856p"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.096800 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-3780-account-create-update-nzcdq"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.107251 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fgcv6"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.117898 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3780-account-create-update-nzcdq"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.129492 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-njhfl"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.140102 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8642b"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.151500 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-efd2-account-create-update-rxcxq"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.163910 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-njhfl"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.176171 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-e779-account-create-update-kqnth"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.187697 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8642b"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.198738 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-e779-account-create-update-kqnth"] Nov 28 21:19:49 crc kubenswrapper[4957]: I1128 21:19:49.210356 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-efd2-account-create-update-rxcxq"] Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.827782 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9a05c8-f27b-4636-a835-ceb3a38cf708" path="/var/lib/kubelet/pods/2e9a05c8-f27b-4636-a835-ceb3a38cf708/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.829968 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7330b121-68e0-43b8-ba88-a8c6f08c878b" path="/var/lib/kubelet/pods/7330b121-68e0-43b8-ba88-a8c6f08c878b/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.831016 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdf6984-caf0-430e-bb97-52e25019aa8f" path="/var/lib/kubelet/pods/7bdf6984-caf0-430e-bb97-52e25019aa8f/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.832106 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337" path="/var/lib/kubelet/pods/a4a6f3c4-1c3e-459e-b7d7-9ec0fe0e1337/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.833706 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b166cebe-60ab-4a64-843d-7f4c0586c788" path="/var/lib/kubelet/pods/b166cebe-60ab-4a64-843d-7f4c0586c788/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.834530 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40378c2-5f58-4240-8b46-9b8503574a70" path="/var/lib/kubelet/pods/d40378c2-5f58-4240-8b46-9b8503574a70/volumes" Nov 28 21:19:50 crc kubenswrapper[4957]: I1128 21:19:50.835493 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9abc1a5-b97a-40ff-a510-ba9c9a5deabc" path="/var/lib/kubelet/pods/d9abc1a5-b97a-40ff-a510-ba9c9a5deabc/volumes" Nov 28 
Nov 28 21:19:59 crc kubenswrapper[4957]: I1128 21:19:59.042411 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh"]
Nov 28 21:19:59 crc kubenswrapper[4957]: I1128 21:19:59.064284 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-8468-account-create-update-sbddd"]
Nov 28 21:19:59 crc kubenswrapper[4957]: I1128 21:19:59.076806 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-8468-account-create-update-sbddd"]
Nov 28 21:19:59 crc kubenswrapper[4957]: I1128 21:19:59.091837 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-dl6fh"]
Nov 28 21:19:59 crc kubenswrapper[4957]: I1128 21:19:59.813410 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:19:59 crc kubenswrapper[4957]: E1128 21:19:59.813928 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:20:00 crc kubenswrapper[4957]: I1128 21:20:00.880787 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1018b31e-7e63-4e56-9ec8-29bcf03aa787" path="/var/lib/kubelet/pods/1018b31e-7e63-4e56-9ec8-29bcf03aa787/volumes"
Nov 28 21:20:00 crc kubenswrapper[4957]: I1128 21:20:00.893901 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5de5502-9325-48ca-8ced-377dd347c155" path="/var/lib/kubelet/pods/c5de5502-9325-48ca-8ced-377dd347c155/volumes"
Nov 28 21:20:04 crc kubenswrapper[4957]: I1128 21:20:04.034231 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-ts4r7"]
Nov 28 21:20:04 crc kubenswrapper[4957]: I1128 21:20:04.046906 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-ts4r7"]
Nov 28 21:20:04 crc kubenswrapper[4957]: I1128 21:20:04.827509 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b34c44-6e04-4e3f-8147-9616e4003021" path="/var/lib/kubelet/pods/f9b34c44-6e04-4e3f-8147-9616e4003021/volumes"
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.053295 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4vmcg"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.067569 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8a06-account-create-update-r2k25"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.078157 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-48df-account-create-update-r4h6j"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.091376 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qwxh2"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.101218 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-65b7-account-create-update-g5twz"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.112417 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-48df-account-create-update-r4h6j"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.123437 4957 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8a06-account-create-update-r2k25"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.134345 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d5btv"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.149344 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4vmcg"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.160138 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-de40-account-create-update-6c8ts"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.173688 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-65b7-account-create-update-g5twz"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.187187 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d5btv"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.200636 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qwxh2"]
Nov 28 21:20:05 crc kubenswrapper[4957]: I1128 21:20:05.213876 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-de40-account-create-update-6c8ts"]
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.830380 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0f6f45-582e-4856-8bec-00097857b539" path="/var/lib/kubelet/pods/2b0f6f45-582e-4856-8bec-00097857b539/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.832247 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dab3e4c-800d-4848-9ff2-c5aed4c6a820" path="/var/lib/kubelet/pods/7dab3e4c-800d-4848-9ff2-c5aed4c6a820/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.834027 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84499c1d-3d44-4911-85b4-c1dafdb93b03" path="/var/lib/kubelet/pods/84499c1d-3d44-4911-85b4-c1dafdb93b03/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.835334 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf99417-e04e-43cd-87d4-86b1bd8f33cc" path="/var/lib/kubelet/pods/9cf99417-e04e-43cd-87d4-86b1bd8f33cc/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.836877 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74265c6-4eb6-45a1-aa83-b0656eed2247" path="/var/lib/kubelet/pods/a74265c6-4eb6-45a1-aa83-b0656eed2247/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.837791 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ddced4-cb68-46ae-a929-a528b32c5ed5" path="/var/lib/kubelet/pods/f2ddced4-cb68-46ae-a929-a528b32c5ed5/volumes"
Nov 28 21:20:06 crc kubenswrapper[4957]: I1128 21:20:06.838692 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f5a642-e071-4830-b31e-7f0e8e7b73ef" path="/var/lib/kubelet/pods/f5f5a642-e071-4830-b31e-7f0e8e7b73ef/volumes"
Nov 28 21:20:14 crc kubenswrapper[4957]: I1128 21:20:14.813443 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5"
Nov 28 21:20:14 crc kubenswrapper[4957]: E1128 21:20:14.814173 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.532569 4957 scope.go:117] "RemoveContainer" containerID="c951c0455cd250da085184e6da43e31229f49cacf7ffee5198c944b5ad414283"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.574426 4957 scope.go:117] "RemoveContainer" containerID="d335e7a0cac76ea31e3301e74e3cd633c66ed9cda6f282849a8114fd5fdd65a7"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.615109 4957 scope.go:117] "RemoveContainer" containerID="4ceca11700f34de2edb0bf530cef8577435fb04033483cc92bad2efb34bf5783"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.705768 4957 scope.go:117] "RemoveContainer" containerID="9f9794d91df57d2a0f5a03144d8d4fdf7ab07730bffbb5b76e1a431c7910f799"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.756825 4957 scope.go:117] "RemoveContainer" containerID="a207617a8722773beda4b1970512bf431eb8d8821a83bccc0c4b10d9cb9b5b45"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.807547 4957 scope.go:117] "RemoveContainer" containerID="36c7c81f241a776668d44474fcff4d14d1a6dd199236b784cb68c5fa39aababd"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.872457 4957 scope.go:117] "RemoveContainer" containerID="8a261ec577d947cf676a7f0c644fd6d45f6f6d6b683737cae5a1e2dc2c8d0f22"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.900899 4957 scope.go:117] "RemoveContainer" containerID="525a7e0ea4496bde619cbaf479dd245215cc56bc1cde1c64e5fb2f0de91a6b0f"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.919649 4957 scope.go:117] "RemoveContainer" containerID="e2d7036ada5bac93e3b4b9b51c213cb1c4118f52008ef9ff56447102af0ef019"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.944555 4957 scope.go:117] "RemoveContainer" containerID="131a055e9c035ea96eed3a1a0bcbb4a70b6c16ce5b2adfab6adc9f18bcbe2be9"
Nov 28 21:20:25 crc kubenswrapper[4957]: I1128 21:20:25.995634 4957 scope.go:117] "RemoveContainer" containerID="ec15be84652fab2b6e1ee21c173cd8e4420571821d459eaefad3bc922dd19b74"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.026221 4957 scope.go:117] "RemoveContainer" containerID="f7d0bf9ccdda3457ec050dddee01c8d94f5dd4d38e426d5b51a5ca74ac27db08"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.053869 4957 scope.go:117] "RemoveContainer" containerID="af4f236da06302b031f51fab43a9f8d7c3f62acd97038edae8ad0cbd694171af"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.109034 4957 scope.go:117] "RemoveContainer" containerID="7a5902b9af319542c0c399752b44cfee09f0d9d8579aaa7bc95be2d84338a231"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.151787 4957 scope.go:117] "RemoveContainer" containerID="4c2cd7f0c88891fde60f93c5b42bdca6caa45529ff5eef345e436b6eda113ded"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.178010 4957 scope.go:117] "RemoveContainer" containerID="8026bfd48571ee9089a35f3dc2e1b5f4a90c9c75f117e08d3290e854d00201d0"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.200415 4957 scope.go:117] "RemoveContainer" containerID="8986c22e1ac6d67e7a3e69087cc610d29db36c6a606cf8eda08466ad7d3c025c"
Nov 28 21:20:26 crc kubenswrapper[4957]: I1128 21:20:26.246783 4957 scope.go:117] "RemoveContainer" containerID="b558e70ce8e2b3cfc56a5f374134361512016b91e23031ef5c926fcabf100440"
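The burst of RemoveContainer records at 21:20:25-26, like the single ones at 21:16:25, 21:17:25, 21:18:25 and 21:19:25, lands at roughly :25 past each minute; that once-a-minute cadence matches the kubelet's periodic container garbage collection (the default GC interval is one minute). Bucketing the records per minute makes the sweep visible:

    import re
    from collections import Counter

    sweeps = Counter()
    for line in open("kubelet.log"):  # hypothetical path
        m = re.search(r'I\d{4} (\d{2}:\d{2}):\d{2}\.\d+ \d+ scope\.go:117\] "RemoveContainer"', line)
        if m:
            sweeps[m[1]] += 1

    # Note: the CrashLoopBackOff retries for 4cd7324e... show up as a steady
    # background of one removal per sync pass; the GC sweep is the spike.
    for minute, n in sorted(sweeps.items()):
        print(minute, n)  # bursts at ~:25 past the minute -> periodic container GC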
containerID="f2658daf04de09433dab00380155ff87c0074e7efa09909506f38e2b7eee1331" Nov 28 21:20:28 crc kubenswrapper[4957]: I1128 21:20:28.813720 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:20:28 crc kubenswrapper[4957]: E1128 21:20:28.814790 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:20:29 crc kubenswrapper[4957]: I1128 21:20:29.063333 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-l6jkj"] Nov 28 21:20:29 crc kubenswrapper[4957]: I1128 21:20:29.075430 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-l6jkj"] Nov 28 21:20:30 crc kubenswrapper[4957]: I1128 21:20:30.832404 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8aef833-7bf5-4ae4-9fc9-62e1bf24871f" path="/var/lib/kubelet/pods/a8aef833-7bf5-4ae4-9fc9-62e1bf24871f/volumes" Nov 28 21:20:38 crc kubenswrapper[4957]: I1128 21:20:38.034166 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rb757"] Nov 28 21:20:38 crc kubenswrapper[4957]: I1128 21:20:38.045949 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rb757"] Nov 28 21:20:38 crc kubenswrapper[4957]: I1128 21:20:38.841545 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8196ca60-081d-4a36-acaf-d7e019bf2b12" path="/var/lib/kubelet/pods/8196ca60-081d-4a36-acaf-d7e019bf2b12/volumes" Nov 28 21:20:43 crc kubenswrapper[4957]: I1128 21:20:43.812544 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:20:43 crc kubenswrapper[4957]: E1128 21:20:43.813292 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:20:56 crc kubenswrapper[4957]: I1128 21:20:56.813100 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:20:56 crc kubenswrapper[4957]: E1128 21:20:56.815095 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:21:08 crc kubenswrapper[4957]: I1128 21:21:08.049717 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tzbj8"] Nov 28 21:21:08 crc kubenswrapper[4957]: I1128 21:21:08.068826 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tzbj8"] Nov 28 21:21:08 
crc kubenswrapper[4957]: I1128 21:21:08.835533 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8" path="/var/lib/kubelet/pods/736a7ff3-b1f7-4bcc-b6d1-ce6e96ccf7d8/volumes" Nov 28 21:21:10 crc kubenswrapper[4957]: I1128 21:21:10.824315 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:21:10 crc kubenswrapper[4957]: E1128 21:21:10.825223 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:21:15 crc kubenswrapper[4957]: I1128 21:21:15.044379 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v9dz9"] Nov 28 21:21:15 crc kubenswrapper[4957]: I1128 21:21:15.057029 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v9dz9"] Nov 28 21:21:16 crc kubenswrapper[4957]: I1128 21:21:16.825628 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eef89ff-3725-4c4b-8b08-1e1a6f6369cb" path="/var/lib/kubelet/pods/9eef89ff-3725-4c4b-8b08-1e1a6f6369cb/volumes" Nov 28 21:21:22 crc kubenswrapper[4957]: I1128 21:21:22.813689 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:21:22 crc kubenswrapper[4957]: E1128 21:21:22.814411 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.041365 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5gcld"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.052559 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5gcld"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.341658 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.344418 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.358467 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.451173 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.451286 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.451516 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.542891 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.545982 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.553525 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.553604 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.553830 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.554010 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.554381 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.554739 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.586170 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t\") pod \"certified-operators-gt4jz\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.657304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.657637 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.657913 4957 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qxw\" (UniqueName: \"kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.711240 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.762674 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.763113 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qxw\" (UniqueName: \"kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.763388 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.763608 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.764035 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.783034 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qxw\" (UniqueName: \"kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw\") pod \"redhat-operators-qnrcf\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:23 crc kubenswrapper[4957]: I1128 21:21:23.868705 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:24 crc kubenswrapper[4957]: I1128 21:21:24.245566 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:24 crc kubenswrapper[4957]: I1128 21:21:24.372911 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:24 crc kubenswrapper[4957]: I1128 21:21:24.827502 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1" path="/var/lib/kubelet/pods/1f3672ea-b9dd-4253-8a05-7bdff6a0d9f1/volumes" Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.429852 4957 generic.go:334] "Generic (PLEG): container finished" podID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerID="bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a" exitCode=0 Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.429902 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerDied","Data":"bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a"} Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.429951 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerStarted","Data":"0fae6bb31d18ad8eb4384ec6592b31ea1144fec4026da9c520769ed5478f697e"} Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.432359 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.432517 4957 generic.go:334] "Generic (PLEG): container finished" podID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerID="81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb" exitCode=0 Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.432561 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerDied","Data":"81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb"} Nov 28 21:21:25 crc kubenswrapper[4957]: I1128 21:21:25.432588 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerStarted","Data":"427b5ab8476155c4bb41cd0ff196df9b628f16ae75e09f283ff8527193e30552"} Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.452931 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerStarted","Data":"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873"} Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.643691 4957 scope.go:117] "RemoveContainer" containerID="196782afafab1ca6b58ee64f7754da79b615278bc94e58431d1eb5db52185e7f" Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.685382 4957 scope.go:117] "RemoveContainer" containerID="6dc4f482b8a4e35bee03479d90c9119635eec0060008b7291021d36f1805f46b" Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.738488 4957 scope.go:117] "RemoveContainer" containerID="b4eec643e13f90868dfe11ab06c8c7c77004d3dd8d5555762d606ab2fb302d38" Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.871769 
4957 scope.go:117] "RemoveContainer" containerID="dc130955cb6ca5d8b6138209e058c659eb5184f2dffb48e7a89e2c5adc29cad6" Nov 28 21:21:26 crc kubenswrapper[4957]: I1128 21:21:26.963679 4957 scope.go:117] "RemoveContainer" containerID="ea6d41b09b67cc6934e4ec92ac366238ba3a251e032f17bbbefba63fff87adb1" Nov 28 21:21:27 crc kubenswrapper[4957]: I1128 21:21:27.470563 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerStarted","Data":"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742"} Nov 28 21:21:28 crc kubenswrapper[4957]: I1128 21:21:28.482725 4957 generic.go:334] "Generic (PLEG): container finished" podID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerID="d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873" exitCode=0 Nov 28 21:21:28 crc kubenswrapper[4957]: I1128 21:21:28.482822 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerDied","Data":"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873"} Nov 28 21:21:30 crc kubenswrapper[4957]: I1128 21:21:30.027098 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wnxk5"] Nov 28 21:21:30 crc kubenswrapper[4957]: I1128 21:21:30.038138 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wnxk5"] Nov 28 21:21:30 crc kubenswrapper[4957]: I1128 21:21:30.505987 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerStarted","Data":"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3"} Nov 28 21:21:30 crc kubenswrapper[4957]: I1128 21:21:30.541879 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gt4jz" podStartSLOduration=3.6265621919999997 podStartE2EDuration="7.541855312s" podCreationTimestamp="2025-11-28 21:21:23 +0000 UTC" firstStartedPulling="2025-11-28 21:21:25.434187516 +0000 UTC m=+1924.902835425" lastFinishedPulling="2025-11-28 21:21:29.349480636 +0000 UTC m=+1928.818128545" observedRunningTime="2025-11-28 21:21:30.526475402 +0000 UTC m=+1929.995123321" watchObservedRunningTime="2025-11-28 21:21:30.541855312 +0000 UTC m=+1930.010503241" Nov 28 21:21:30 crc kubenswrapper[4957]: I1128 21:21:30.826034 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1717eaea-018a-4e9d-af82-ce7b3fb3868e" path="/var/lib/kubelet/pods/1717eaea-018a-4e9d-af82-ce7b3fb3868e/volumes" Nov 28 21:21:31 crc kubenswrapper[4957]: I1128 21:21:31.518294 4957 generic.go:334] "Generic (PLEG): container finished" podID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerID="92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742" exitCode=0 Nov 28 21:21:31 crc kubenswrapper[4957]: I1128 21:21:31.518337 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerDied","Data":"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742"} Nov 28 21:21:32 crc kubenswrapper[4957]: I1128 21:21:32.529275 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" 
event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerStarted","Data":"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3"} Nov 28 21:21:32 crc kubenswrapper[4957]: I1128 21:21:32.545987 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qnrcf" podStartSLOduration=2.88154957 podStartE2EDuration="9.545955921s" podCreationTimestamp="2025-11-28 21:21:23 +0000 UTC" firstStartedPulling="2025-11-28 21:21:25.432086935 +0000 UTC m=+1924.900734844" lastFinishedPulling="2025-11-28 21:21:32.096493286 +0000 UTC m=+1931.565141195" observedRunningTime="2025-11-28 21:21:32.545563931 +0000 UTC m=+1932.014211840" watchObservedRunningTime="2025-11-28 21:21:32.545955921 +0000 UTC m=+1932.014603830" Nov 28 21:21:33 crc kubenswrapper[4957]: I1128 21:21:33.711962 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:33 crc kubenswrapper[4957]: I1128 21:21:33.712028 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:33 crc kubenswrapper[4957]: I1128 21:21:33.869813 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:33 crc kubenswrapper[4957]: I1128 21:21:33.871614 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:34 crc kubenswrapper[4957]: I1128 21:21:34.031590 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lm4b6"] Nov 28 21:21:34 crc kubenswrapper[4957]: I1128 21:21:34.042156 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lm4b6"] Nov 28 21:21:34 crc kubenswrapper[4957]: I1128 21:21:34.764439 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gt4jz" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="registry-server" probeResult="failure" output=< Nov 28 21:21:34 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 21:21:34 crc kubenswrapper[4957]: > Nov 28 21:21:34 crc kubenswrapper[4957]: I1128 21:21:34.826608 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8d4ba5-28bb-41f2-8158-04d673e8ee19" path="/var/lib/kubelet/pods/eb8d4ba5-28bb-41f2-8158-04d673e8ee19/volumes" Nov 28 21:21:34 crc kubenswrapper[4957]: I1128 21:21:34.926945 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnrcf" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" probeResult="failure" output=< Nov 28 21:21:34 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 21:21:34 crc kubenswrapper[4957]: > Nov 28 21:21:35 crc kubenswrapper[4957]: I1128 21:21:35.813546 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:21:35 crc kubenswrapper[4957]: E1128 21:21:35.814130 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:21:43 crc kubenswrapper[4957]: I1128 21:21:43.775828 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:43 crc kubenswrapper[4957]: I1128 21:21:43.847993 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:44 crc kubenswrapper[4957]: I1128 21:21:44.013465 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:44 crc kubenswrapper[4957]: I1128 21:21:44.918541 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnrcf" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" probeResult="failure" output=< Nov 28 21:21:44 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 21:21:44 crc kubenswrapper[4957]: > Nov 28 21:21:45 crc kubenswrapper[4957]: I1128 21:21:45.651873 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gt4jz" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="registry-server" containerID="cri-o://b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3" gracePeriod=2 Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.097135 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.222052 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities\") pod \"c98e013f-d432-4f4b-8ca9-999861942a5a\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.222175 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t\") pod \"c98e013f-d432-4f4b-8ca9-999861942a5a\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.222255 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content\") pod \"c98e013f-d432-4f4b-8ca9-999861942a5a\" (UID: \"c98e013f-d432-4f4b-8ca9-999861942a5a\") " Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.223315 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities" (OuterVolumeSpecName: "utilities") pod "c98e013f-d432-4f4b-8ca9-999861942a5a" (UID: "c98e013f-d432-4f4b-8ca9-999861942a5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.228818 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t" (OuterVolumeSpecName: "kube-api-access-8jl7t") pod "c98e013f-d432-4f4b-8ca9-999861942a5a" (UID: "c98e013f-d432-4f4b-8ca9-999861942a5a"). 
InnerVolumeSpecName "kube-api-access-8jl7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.271119 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c98e013f-d432-4f4b-8ca9-999861942a5a" (UID: "c98e013f-d432-4f4b-8ca9-999861942a5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.325124 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.325174 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/c98e013f-d432-4f4b-8ca9-999861942a5a-kube-api-access-8jl7t\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.325189 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98e013f-d432-4f4b-8ca9-999861942a5a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.676008 4957 generic.go:334] "Generic (PLEG): container finished" podID="e706336e-29eb-4b07-b29c-bb080c8026be" containerID="3ef05872cb4f0ce960673f8dd6016f11a89159ac4def37a6416ac5e80526f198" exitCode=0 Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.676137 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" event={"ID":"e706336e-29eb-4b07-b29c-bb080c8026be","Type":"ContainerDied","Data":"3ef05872cb4f0ce960673f8dd6016f11a89159ac4def37a6416ac5e80526f198"} Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.680882 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gt4jz" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.681335 4957 generic.go:334] "Generic (PLEG): container finished" podID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerID="b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3" exitCode=0 Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.681466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerDied","Data":"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3"} Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.681500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gt4jz" event={"ID":"c98e013f-d432-4f4b-8ca9-999861942a5a","Type":"ContainerDied","Data":"427b5ab8476155c4bb41cd0ff196df9b628f16ae75e09f283ff8527193e30552"} Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.681518 4957 scope.go:117] "RemoveContainer" containerID="b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.718405 4957 scope.go:117] "RemoveContainer" containerID="d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.729268 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.739467 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gt4jz"] Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.748261 4957 scope.go:117] "RemoveContainer" containerID="81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.804324 4957 scope.go:117] "RemoveContainer" containerID="b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3" Nov 28 21:21:46 crc kubenswrapper[4957]: E1128 21:21:46.804855 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3\": container with ID starting with b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3 not found: ID does not exist" containerID="b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.804890 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3"} err="failed to get container status \"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3\": rpc error: code = NotFound desc = could not find container \"b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3\": container with ID starting with b1dfc684be4129c4c0db68006e554048b61e0d2dd6755c1c253c44080230ecb3 not found: ID does not exist" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.804911 4957 scope.go:117] "RemoveContainer" containerID="d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873" Nov 28 21:21:46 crc kubenswrapper[4957]: E1128 21:21:46.805297 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873\": container with ID 
starting with d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873 not found: ID does not exist" containerID="d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.805341 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873"} err="failed to get container status \"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873\": rpc error: code = NotFound desc = could not find container \"d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873\": container with ID starting with d202e59ddd8a04e46384e079335c8573e4e37391c2ab33742ea7a7d5f817b873 not found: ID does not exist" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.805371 4957 scope.go:117] "RemoveContainer" containerID="81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb" Nov 28 21:21:46 crc kubenswrapper[4957]: E1128 21:21:46.805640 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb\": container with ID starting with 81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb not found: ID does not exist" containerID="81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.805665 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb"} err="failed to get container status \"81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb\": rpc error: code = NotFound desc = could not find container \"81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb\": container with ID starting with 81487be08c85702223b6105540277db5ffc5d45b64b2c87b5cc3b91865b261eb not found: ID does not exist" Nov 28 21:21:46 crc kubenswrapper[4957]: I1128 21:21:46.824789 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" path="/var/lib/kubelet/pods/c98e013f-d432-4f4b-8ca9-999861942a5a/volumes" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.152195 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.267354 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key\") pod \"e706336e-29eb-4b07-b29c-bb080c8026be\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.267743 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory\") pod \"e706336e-29eb-4b07-b29c-bb080c8026be\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.267830 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gw7q\" (UniqueName: \"kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q\") pod \"e706336e-29eb-4b07-b29c-bb080c8026be\" (UID: \"e706336e-29eb-4b07-b29c-bb080c8026be\") " Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.274466 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q" (OuterVolumeSpecName: "kube-api-access-7gw7q") pod "e706336e-29eb-4b07-b29c-bb080c8026be" (UID: "e706336e-29eb-4b07-b29c-bb080c8026be"). InnerVolumeSpecName "kube-api-access-7gw7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.300074 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e706336e-29eb-4b07-b29c-bb080c8026be" (UID: "e706336e-29eb-4b07-b29c-bb080c8026be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.310472 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory" (OuterVolumeSpecName: "inventory") pod "e706336e-29eb-4b07-b29c-bb080c8026be" (UID: "e706336e-29eb-4b07-b29c-bb080c8026be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.370959 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.370998 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gw7q\" (UniqueName: \"kubernetes.io/projected/e706336e-29eb-4b07-b29c-bb080c8026be-kube-api-access-7gw7q\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.371013 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e706336e-29eb-4b07-b29c-bb080c8026be-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.707940 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" event={"ID":"e706336e-29eb-4b07-b29c-bb080c8026be","Type":"ContainerDied","Data":"29883f96aa99c250695758172313ac05600c4157b47feda1b33df6de87e3bf70"} Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.707986 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29883f96aa99c250695758172313ac05600c4157b47feda1b33df6de87e3bf70" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.708059 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.801949 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9"] Nov 28 21:21:48 crc kubenswrapper[4957]: E1128 21:21:48.802680 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="extract-content" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.802780 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="extract-content" Nov 28 21:21:48 crc kubenswrapper[4957]: E1128 21:21:48.802855 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e706336e-29eb-4b07-b29c-bb080c8026be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.802917 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e706336e-29eb-4b07-b29c-bb080c8026be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 21:21:48 crc kubenswrapper[4957]: E1128 21:21:48.802981 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="registry-server" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.803071 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="registry-server" Nov 28 21:21:48 crc kubenswrapper[4957]: E1128 21:21:48.803158 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="extract-utilities" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.803211 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="extract-utilities" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.803509 4957 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c98e013f-d432-4f4b-8ca9-999861942a5a" containerName="registry-server" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.803632 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e706336e-29eb-4b07-b29c-bb080c8026be" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.804535 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.821287 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.822197 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.822578 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.822743 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.840479 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9"] Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.882653 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.882738 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7jbm\" (UniqueName: \"kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.882868 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.985293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.985441 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.985506 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7jbm\" (UniqueName: \"kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:48 crc kubenswrapper[4957]: I1128 21:21:48.990055 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:49 crc kubenswrapper[4957]: I1128 21:21:49.002946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:49 crc kubenswrapper[4957]: I1128 21:21:49.006279 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7jbm\" (UniqueName: \"kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:49 crc kubenswrapper[4957]: I1128 21:21:49.152850 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:21:49 crc kubenswrapper[4957]: I1128 21:21:49.720199 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9"] Nov 28 21:21:49 crc kubenswrapper[4957]: I1128 21:21:49.814024 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:21:49 crc kubenswrapper[4957]: E1128 21:21:49.814313 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:21:50 crc kubenswrapper[4957]: I1128 21:21:50.737204 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" event={"ID":"34a11afa-d26a-4036-8d4e-6dcd96bc3036","Type":"ContainerStarted","Data":"394948b83b4b00dd761be2d184b97099fb1cda9edea74158a2a199c8186afd33"} Nov 28 21:21:50 crc kubenswrapper[4957]: I1128 21:21:50.738900 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" event={"ID":"34a11afa-d26a-4036-8d4e-6dcd96bc3036","Type":"ContainerStarted","Data":"0022a1e460cf3384e17dd850e47d15581d327f379491fcf9327bb6dfaa51c975"} Nov 28 21:21:50 crc kubenswrapper[4957]: I1128 21:21:50.754850 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" podStartSLOduration=2.165053447 podStartE2EDuration="2.754832832s" podCreationTimestamp="2025-11-28 21:21:48 +0000 UTC" firstStartedPulling="2025-11-28 21:21:49.72074614 +0000 UTC m=+1949.189394049" lastFinishedPulling="2025-11-28 21:21:50.310525525 +0000 UTC m=+1949.779173434" observedRunningTime="2025-11-28 21:21:50.75112537 +0000 UTC m=+1950.219773279" watchObservedRunningTime="2025-11-28 21:21:50.754832832 +0000 UTC m=+1950.223480741" Nov 28 21:21:53 crc kubenswrapper[4957]: I1128 21:21:53.923759 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:53 crc kubenswrapper[4957]: I1128 21:21:53.986982 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:54 crc kubenswrapper[4957]: I1128 21:21:54.542425 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:55 crc kubenswrapper[4957]: I1128 21:21:55.787875 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnrcf" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" containerID="cri-o://f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3" gracePeriod=2 Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.447138 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.469214 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities\") pod \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.469978 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content\") pod \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.470200 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qxw\" (UniqueName: \"kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw\") pod \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\" (UID: \"8c246b6e-73f0-40c3-a709-f683e2c7eddb\") " Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.470009 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities" (OuterVolumeSpecName: "utilities") pod "8c246b6e-73f0-40c3-a709-f683e2c7eddb" (UID: "8c246b6e-73f0-40c3-a709-f683e2c7eddb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.482990 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw" (OuterVolumeSpecName: "kube-api-access-s8qxw") pod "8c246b6e-73f0-40c3-a709-f683e2c7eddb" (UID: "8c246b6e-73f0-40c3-a709-f683e2c7eddb"). InnerVolumeSpecName "kube-api-access-s8qxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.574572 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8qxw\" (UniqueName: \"kubernetes.io/projected/8c246b6e-73f0-40c3-a709-f683e2c7eddb-kube-api-access-s8qxw\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.574604 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.598639 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c246b6e-73f0-40c3-a709-f683e2c7eddb" (UID: "8c246b6e-73f0-40c3-a709-f683e2c7eddb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.676632 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c246b6e-73f0-40c3-a709-f683e2c7eddb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.798332 4957 generic.go:334] "Generic (PLEG): container finished" podID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerID="f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3" exitCode=0 Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.798383 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnrcf" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.798377 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerDied","Data":"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3"} Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.799907 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnrcf" event={"ID":"8c246b6e-73f0-40c3-a709-f683e2c7eddb","Type":"ContainerDied","Data":"0fae6bb31d18ad8eb4384ec6592b31ea1144fec4026da9c520769ed5478f697e"} Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.799980 4957 scope.go:117] "RemoveContainer" containerID="f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.836284 4957 scope.go:117] "RemoveContainer" containerID="92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.853368 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.866387 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnrcf"] Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.868417 4957 scope.go:117] "RemoveContainer" containerID="bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.922847 4957 scope.go:117] "RemoveContainer" containerID="f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3" Nov 28 21:21:56 crc kubenswrapper[4957]: E1128 21:21:56.923400 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3\": container with ID starting with f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3 not found: ID does not exist" containerID="f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.923435 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3"} err="failed to get container status \"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3\": rpc error: code = NotFound desc = could not find container \"f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3\": container with ID starting with f58b0875def90c5b306e0cfce64ce71016ed7ef9e6256d63c656035506a329f3 not found: ID does not exist" Nov 28 21:21:56 crc 
kubenswrapper[4957]: I1128 21:21:56.923566 4957 scope.go:117] "RemoveContainer" containerID="92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742" Nov 28 21:21:56 crc kubenswrapper[4957]: E1128 21:21:56.924071 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742\": container with ID starting with 92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742 not found: ID does not exist" containerID="92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.924095 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742"} err="failed to get container status \"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742\": rpc error: code = NotFound desc = could not find container \"92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742\": container with ID starting with 92f7d42f89fe00312e6a5bf566c55e1865e95c31550a0abe4d4d0a1de6d16742 not found: ID does not exist" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.924108 4957 scope.go:117] "RemoveContainer" containerID="bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a" Nov 28 21:21:56 crc kubenswrapper[4957]: E1128 21:21:56.924475 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a\": container with ID starting with bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a not found: ID does not exist" containerID="bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a" Nov 28 21:21:56 crc kubenswrapper[4957]: I1128 21:21:56.924491 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a"} err="failed to get container status \"bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a\": rpc error: code = NotFound desc = could not find container \"bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a\": container with ID starting with bffa30d42b5ccbae7ba72bd8a27f45ac506ab039becc6671b31e86de29478a8a not found: ID does not exist" Nov 28 21:21:58 crc kubenswrapper[4957]: I1128 21:21:58.835113 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" path="/var/lib/kubelet/pods/8c246b6e-73f0-40c3-a709-f683e2c7eddb/volumes" Nov 28 21:22:02 crc kubenswrapper[4957]: I1128 21:22:02.812934 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:22:02 crc kubenswrapper[4957]: E1128 21:22:02.813554 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:22:15 crc kubenswrapper[4957]: I1128 21:22:15.813430 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" 
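
The RemoveContainer / ContainerStatus exchange just above is a benign race rather than a failure: by the time the kubelet retries removal, the redhat-operators-qnrcf containers are already gone, so CRI-O answers NotFound for each ID and the kubelet moves on, since "not found" is exactly the end state it wanted. A minimal sketch of the same treat-NotFound-as-success idiom, written against the Kubernetes API with client-go (illustrative only: it assumes an already-constructed clientset, and the kubelet's real path speaks CRI to the runtime rather than this API):

    package cleanup

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deletePodIdempotent deletes a pod but treats "already gone" as success,
    // mirroring how the kubelet shrugs off the NotFound errors logged above.
    func deletePodIdempotent(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
        if err != nil && !apierrors.IsNotFound(err) {
            return err
        }
        return nil
    }

The machine-config-daemon entries interleaved here show a different mechanism: the container keeps failing, so the kubelet refuses to restart it until its restart back-off (logged as "back-off 5m0s") expires. The same "Error syncing pod, skipping" line repeats at 21:22:02, 21:22:15 and 21:22:27 below, until the retry at 21:22:42 is allowed through and the container starts at 21:22:43 (ContainerStarted "b3001db9...").
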
Nov 28 21:22:15 crc kubenswrapper[4957]: E1128 21:22:15.814265 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:22:16 crc kubenswrapper[4957]: I1128 21:22:16.063674 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tfscq"] Nov 28 21:22:16 crc kubenswrapper[4957]: I1128 21:22:16.074221 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tfscq"] Nov 28 21:22:16 crc kubenswrapper[4957]: I1128 21:22:16.828811 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ea8814-97e9-483f-b18b-152cf55db66e" path="/var/lib/kubelet/pods/d9ea8814-97e9-483f-b18b-152cf55db66e/volumes" Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.034868 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5199-account-create-update-clxd9"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.046824 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m5vbz"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.066483 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9e02-account-create-update-b59kd"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.084431 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5199-account-create-update-clxd9"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.093907 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-28e1-account-create-update-r5nnk"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.102252 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m5vbz"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.116982 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-sz4nq"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.130821 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-sz4nq"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.143849 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9e02-account-create-update-b59kd"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.154971 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-28e1-account-create-update-r5nnk"] Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.827408 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28598bcc-eeb8-4f16-a9f3-504804e6dd44" path="/var/lib/kubelet/pods/28598bcc-eeb8-4f16-a9f3-504804e6dd44/volumes" Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.828603 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="689e0a10-8474-4f07-b2c3-35cb230e2803" path="/var/lib/kubelet/pods/689e0a10-8474-4f07-b2c3-35cb230e2803/volumes" Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.830745 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76" path="/var/lib/kubelet/pods/6c4efc2e-f4e9-483b-ba4f-008ae1d2ec76/volumes" Nov 28 21:22:18 crc 
kubenswrapper[4957]: I1128 21:22:18.832027 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7" path="/var/lib/kubelet/pods/b31019da-f89f-4a3b-9e4e-5f68e1fdb2e7/volumes" Nov 28 21:22:18 crc kubenswrapper[4957]: I1128 21:22:18.832935 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26dceba-3629-4241-84b1-72015dff8552" path="/var/lib/kubelet/pods/c26dceba-3629-4241-84b1-72015dff8552/volumes" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.490577 4957 scope.go:117] "RemoveContainer" containerID="b828127f85e721b2e997d084634e57cb90a4ea37e44735840cc2293cb6527e33" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.529122 4957 scope.go:117] "RemoveContainer" containerID="d5f1eeb4f60198f011e8a5283bf531ae63010c246587dca64c1f8be9d846d07e" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.585328 4957 scope.go:117] "RemoveContainer" containerID="40445a3e8247a16e215abebcf0ba9c57cb3ce924c3e64ea2fe3db36c49e5eaf3" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.646046 4957 scope.go:117] "RemoveContainer" containerID="1f952ee909f8bb6639a0e21a3735444bad908e104d4d33503bb4ec5fc5f84322" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.691762 4957 scope.go:117] "RemoveContainer" containerID="aba8946376771fb205491deba71aa2905b4f707705a4bd4701e265e22036c32c" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.738511 4957 scope.go:117] "RemoveContainer" containerID="6b2caa21d19f776b9c67d7e563c8117825da567de8aa350a872420c27b7694fb" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.797007 4957 scope.go:117] "RemoveContainer" containerID="a1f1280c6d4d31a4f3bbbad3616c9ae987c0b981c04b4bac1d6f2c581e298576" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.814255 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:22:27 crc kubenswrapper[4957]: E1128 21:22:27.814943 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:22:27 crc kubenswrapper[4957]: I1128 21:22:27.879005 4957 scope.go:117] "RemoveContainer" containerID="10dd1628af95e5c2b6a1ebfcfafff0dc237b61fddb3bfca10fecefa8e6d6fc08" Nov 28 21:22:42 crc kubenswrapper[4957]: I1128 21:22:42.814706 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:22:43 crc kubenswrapper[4957]: I1128 21:22:43.402953 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7"} Nov 28 21:22:55 crc kubenswrapper[4957]: I1128 21:22:55.042939 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dnz8n"] Nov 28 21:22:55 crc kubenswrapper[4957]: I1128 21:22:55.054869 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dnz8n"] Nov 28 21:22:56 crc kubenswrapper[4957]: I1128 21:22:56.826829 4957 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="efb8e30f-337f-4de0-8508-486479b41e97" path="/var/lib/kubelet/pods/efb8e30f-337f-4de0-8508-486479b41e97/volumes" Nov 28 21:23:05 crc kubenswrapper[4957]: I1128 21:23:05.647824 4957 generic.go:334] "Generic (PLEG): container finished" podID="34a11afa-d26a-4036-8d4e-6dcd96bc3036" containerID="394948b83b4b00dd761be2d184b97099fb1cda9edea74158a2a199c8186afd33" exitCode=0 Nov 28 21:23:05 crc kubenswrapper[4957]: I1128 21:23:05.647940 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" event={"ID":"34a11afa-d26a-4036-8d4e-6dcd96bc3036","Type":"ContainerDied","Data":"394948b83b4b00dd761be2d184b97099fb1cda9edea74158a2a199c8186afd33"} Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.118648 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.198518 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key\") pod \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.198762 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory\") pod \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.198823 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7jbm\" (UniqueName: \"kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm\") pod \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\" (UID: \"34a11afa-d26a-4036-8d4e-6dcd96bc3036\") " Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.204350 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm" (OuterVolumeSpecName: "kube-api-access-s7jbm") pod "34a11afa-d26a-4036-8d4e-6dcd96bc3036" (UID: "34a11afa-d26a-4036-8d4e-6dcd96bc3036"). InnerVolumeSpecName "kube-api-access-s7jbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.228715 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory" (OuterVolumeSpecName: "inventory") pod "34a11afa-d26a-4036-8d4e-6dcd96bc3036" (UID: "34a11afa-d26a-4036-8d4e-6dcd96bc3036"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.230382 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34a11afa-d26a-4036-8d4e-6dcd96bc3036" (UID: "34a11afa-d26a-4036-8d4e-6dcd96bc3036"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.303447 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.303746 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34a11afa-d26a-4036-8d4e-6dcd96bc3036-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.303758 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7jbm\" (UniqueName: \"kubernetes.io/projected/34a11afa-d26a-4036-8d4e-6dcd96bc3036-kube-api-access-s7jbm\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.670842 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" event={"ID":"34a11afa-d26a-4036-8d4e-6dcd96bc3036","Type":"ContainerDied","Data":"0022a1e460cf3384e17dd850e47d15581d327f379491fcf9327bb6dfaa51c975"} Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.670885 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0022a1e460cf3384e17dd850e47d15581d327f379491fcf9327bb6dfaa51c975" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.670884 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.799089 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh"] Nov 28 21:23:07 crc kubenswrapper[4957]: E1128 21:23:07.800846 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="extract-utilities" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.800885 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="extract-utilities" Nov 28 21:23:07 crc kubenswrapper[4957]: E1128 21:23:07.801003 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a11afa-d26a-4036-8d4e-6dcd96bc3036" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.801026 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a11afa-d26a-4036-8d4e-6dcd96bc3036" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:07 crc kubenswrapper[4957]: E1128 21:23:07.801089 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.801104 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" Nov 28 21:23:07 crc kubenswrapper[4957]: E1128 21:23:07.801154 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="extract-content" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.801168 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="extract-content" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.802163 4957 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8c246b6e-73f0-40c3-a709-f683e2c7eddb" containerName="registry-server" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.802508 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a11afa-d26a-4036-8d4e-6dcd96bc3036" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.805304 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.812472 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.816325 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.816607 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.817498 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.818299 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.819335 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7gd\" (UniqueName: \"kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.819512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.827582 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh"] Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.920679 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.920784 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.920895 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7gd\" (UniqueName: \"kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.926244 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.926628 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:07 crc kubenswrapper[4957]: I1128 21:23:07.937559 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7gd\" (UniqueName: \"kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:08 crc kubenswrapper[4957]: I1128 21:23:08.132111 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:08 crc kubenswrapper[4957]: I1128 21:23:08.498751 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh"] Nov 28 21:23:08 crc kubenswrapper[4957]: I1128 21:23:08.680853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" event={"ID":"c9b89b14-7e55-48b4-bbd9-5c67ed879847","Type":"ContainerStarted","Data":"e5e89f9c7d7a62b3a267eda863e927b1c79ec3af81b5bcb7be87f643b44e4086"} Nov 28 21:23:09 crc kubenswrapper[4957]: I1128 21:23:09.691097 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" event={"ID":"c9b89b14-7e55-48b4-bbd9-5c67ed879847","Type":"ContainerStarted","Data":"990f462072655d49bc99a12bc118c01dc2b0df9dd4131b6326a6a3b437964eed"} Nov 28 21:23:09 crc kubenswrapper[4957]: I1128 21:23:09.712043 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" podStartSLOduration=2.249445698 podStartE2EDuration="2.712026347s" podCreationTimestamp="2025-11-28 21:23:07 +0000 UTC" firstStartedPulling="2025-11-28 21:23:08.498775085 +0000 UTC m=+2027.967422994" lastFinishedPulling="2025-11-28 21:23:08.961355734 +0000 UTC m=+2028.430003643" observedRunningTime="2025-11-28 21:23:09.705085695 +0000 UTC m=+2029.173733604" watchObservedRunningTime="2025-11-28 21:23:09.712026347 +0000 UTC m=+2029.180674256" Nov 28 21:23:14 crc kubenswrapper[4957]: I1128 21:23:14.741398 4957 generic.go:334] "Generic (PLEG): container finished" podID="c9b89b14-7e55-48b4-bbd9-5c67ed879847" containerID="990f462072655d49bc99a12bc118c01dc2b0df9dd4131b6326a6a3b437964eed" exitCode=0 Nov 28 21:23:14 crc kubenswrapper[4957]: I1128 21:23:14.741514 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" event={"ID":"c9b89b14-7e55-48b4-bbd9-5c67ed879847","Type":"ContainerDied","Data":"990f462072655d49bc99a12bc118c01dc2b0df9dd4131b6326a6a3b437964eed"} Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.233986 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.416975 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key\") pod \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.417488 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn7gd\" (UniqueName: \"kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd\") pod \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.417611 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory\") pod \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\" (UID: \"c9b89b14-7e55-48b4-bbd9-5c67ed879847\") " Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.430619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd" (OuterVolumeSpecName: "kube-api-access-vn7gd") pod "c9b89b14-7e55-48b4-bbd9-5c67ed879847" (UID: "c9b89b14-7e55-48b4-bbd9-5c67ed879847"). InnerVolumeSpecName "kube-api-access-vn7gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.449354 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c9b89b14-7e55-48b4-bbd9-5c67ed879847" (UID: "c9b89b14-7e55-48b4-bbd9-5c67ed879847"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.453082 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory" (OuterVolumeSpecName: "inventory") pod "c9b89b14-7e55-48b4-bbd9-5c67ed879847" (UID: "c9b89b14-7e55-48b4-bbd9-5c67ed879847"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.520491 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.520518 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9b89b14-7e55-48b4-bbd9-5c67ed879847-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.520530 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn7gd\" (UniqueName: \"kubernetes.io/projected/c9b89b14-7e55-48b4-bbd9-5c67ed879847-kube-api-access-vn7gd\") on node \"crc\" DevicePath \"\"" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.764025 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" event={"ID":"c9b89b14-7e55-48b4-bbd9-5c67ed879847","Type":"ContainerDied","Data":"e5e89f9c7d7a62b3a267eda863e927b1c79ec3af81b5bcb7be87f643b44e4086"} Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.764300 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e89f9c7d7a62b3a267eda863e927b1c79ec3af81b5bcb7be87f643b44e4086" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.764087 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.825737 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm"] Nov 28 21:23:16 crc kubenswrapper[4957]: E1128 21:23:16.826301 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b89b14-7e55-48b4-bbd9-5c67ed879847" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.826391 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b89b14-7e55-48b4-bbd9-5c67ed879847" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.826692 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b89b14-7e55-48b4-bbd9-5c67ed879847" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.827569 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.830437 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.830438 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm"] Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.830441 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.830591 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.838538 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.928282 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z77f\" (UniqueName: \"kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.928375 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:16 crc kubenswrapper[4957]: I1128 21:23:16.928509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.031187 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z77f\" (UniqueName: \"kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.031282 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.031377 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: 
\"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.035840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.036172 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.050299 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z77f\" (UniqueName: \"kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4xnhm\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.060581 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bfcmv"] Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.070530 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bfcmv"] Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.157073 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.664572 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm"] Nov 28 21:23:17 crc kubenswrapper[4957]: I1128 21:23:17.774499 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" event={"ID":"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f","Type":"ContainerStarted","Data":"d19d5c9145381525a3728c282b01270f71a318039587413f30a92ba0d5de1e4e"} Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.035533 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf4ml"] Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.048681 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf4ml"] Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.786243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" event={"ID":"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f","Type":"ContainerStarted","Data":"6e270b131be3a922d67eaeec49a6303d4b5db7b16119353a76acff70bfcb9dd8"} Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.813489 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" podStartSLOduration=2.414227752 podStartE2EDuration="2.813468145s" podCreationTimestamp="2025-11-28 21:23:16 +0000 UTC" firstStartedPulling="2025-11-28 21:23:17.668783417 +0000 UTC m=+2037.137431326" lastFinishedPulling="2025-11-28 21:23:18.0680238 +0000 UTC m=+2037.536671719" observedRunningTime="2025-11-28 21:23:18.800844733 +0000 UTC m=+2038.269492632" watchObservedRunningTime="2025-11-28 21:23:18.813468145 +0000 UTC m=+2038.282116054" Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.839866 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06fae7da-8394-458c-ad75-2095913be98f" path="/var/lib/kubelet/pods/06fae7da-8394-458c-ad75-2095913be98f/volumes" Nov 28 21:23:18 crc kubenswrapper[4957]: I1128 21:23:18.841198 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8491fd-e8b6-4c13-af22-ab895bf882a4" path="/var/lib/kubelet/pods/5f8491fd-e8b6-4c13-af22-ab895bf882a4/volumes" Nov 28 21:23:19 crc kubenswrapper[4957]: I1128 21:23:19.030688 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-d86d-account-create-update-d66hv"] Nov 28 21:23:19 crc kubenswrapper[4957]: I1128 21:23:19.043304 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-7gskj"] Nov 28 21:23:19 crc kubenswrapper[4957]: I1128 21:23:19.052839 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-d86d-account-create-update-d66hv"] Nov 28 21:23:19 crc kubenswrapper[4957]: I1128 21:23:19.061945 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-7gskj"] Nov 28 21:23:20 crc kubenswrapper[4957]: I1128 21:23:20.826267 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a031112-7352-4144-bcfc-72f292273e61" path="/var/lib/kubelet/pods/9a031112-7352-4144-bcfc-72f292273e61/volumes" Nov 28 21:23:20 crc kubenswrapper[4957]: I1128 21:23:20.827282 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3cf64aa-c4a0-4c2a-ac57-544eefc29d51" 
path="/var/lib/kubelet/pods/b3cf64aa-c4a0-4c2a-ac57-544eefc29d51/volumes" Nov 28 21:23:28 crc kubenswrapper[4957]: I1128 21:23:28.121480 4957 scope.go:117] "RemoveContainer" containerID="2d3bdeafa902ea5b5dda20833a9bc5656494f5f15558017bdf63d1f6c82b78ef" Nov 28 21:23:28 crc kubenswrapper[4957]: I1128 21:23:28.149070 4957 scope.go:117] "RemoveContainer" containerID="eee54a860d64bb8e3407886beababbd736c5b3052b4dbd4efb012b7d7351ee12" Nov 28 21:23:28 crc kubenswrapper[4957]: I1128 21:23:28.216847 4957 scope.go:117] "RemoveContainer" containerID="9b0f328702a3d648a51e8cd5c4d03439cffb9d35fbb4809a967495077475f005" Nov 28 21:23:28 crc kubenswrapper[4957]: I1128 21:23:28.282675 4957 scope.go:117] "RemoveContainer" containerID="88f9289dddbe4c6d4281716dd48b4177cf577077b23497be302994e7906cfa87" Nov 28 21:23:28 crc kubenswrapper[4957]: I1128 21:23:28.320825 4957 scope.go:117] "RemoveContainer" containerID="202eb05c2f99f5990d7fd61a1dd179c2f8468a1c96937f76da6f4f8d404ff69a" Nov 28 21:24:00 crc kubenswrapper[4957]: I1128 21:24:00.560106 4957 generic.go:334] "Generic (PLEG): container finished" podID="52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" containerID="6e270b131be3a922d67eaeec49a6303d4b5db7b16119353a76acff70bfcb9dd8" exitCode=0 Nov 28 21:24:00 crc kubenswrapper[4957]: I1128 21:24:00.560182 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" event={"ID":"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f","Type":"ContainerDied","Data":"6e270b131be3a922d67eaeec49a6303d4b5db7b16119353a76acff70bfcb9dd8"} Nov 28 21:24:01 crc kubenswrapper[4957]: I1128 21:24:01.037511 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nl79f"] Nov 28 21:24:01 crc kubenswrapper[4957]: I1128 21:24:01.046440 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nl79f"] Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.074252 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.136617 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key\") pod \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.136729 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory\") pod \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.136958 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z77f\" (UniqueName: \"kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f\") pod \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\" (UID: \"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f\") " Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.154790 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f" (OuterVolumeSpecName: "kube-api-access-5z77f") pod "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" (UID: "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f"). 
InnerVolumeSpecName "kube-api-access-5z77f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.172251 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory" (OuterVolumeSpecName: "inventory") pod "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" (UID: "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.173917 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" (UID: "52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.240629 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z77f\" (UniqueName: \"kubernetes.io/projected/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-kube-api-access-5z77f\") on node \"crc\" DevicePath \"\"" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.240666 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.240693 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.592277 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" event={"ID":"52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f","Type":"ContainerDied","Data":"d19d5c9145381525a3728c282b01270f71a318039587413f30a92ba0d5de1e4e"} Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.592318 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d19d5c9145381525a3728c282b01270f71a318039587413f30a92ba0d5de1e4e" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.592419 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4xnhm" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.683510 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"] Nov 28 21:24:02 crc kubenswrapper[4957]: E1128 21:24:02.684452 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.684469 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.684741 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.685624 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.728619 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.728870 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.729051 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.729207 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.751527 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfps\" (UniqueName: \"kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.751788 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.751840 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.758041 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"] Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.837369 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d60d5e0-a41b-4fd7-9426-ee0cda73c54e" path="/var/lib/kubelet/pods/8d60d5e0-a41b-4fd7-9426-ee0cda73c54e/volumes" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.853151 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfps\" (UniqueName: \"kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.853349 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 
21:24:02.853399 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.862935 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.871470 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:24:02 crc kubenswrapper[4957]: I1128 21:24:02.907080 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfps\" (UniqueName: \"kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gbxml\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:24:03 crc kubenswrapper[4957]: I1128 21:24:03.050627 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:24:03 crc kubenswrapper[4957]: I1128 21:24:03.586163 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"]
Nov 28 21:24:03 crc kubenswrapper[4957]: I1128 21:24:03.604613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" event={"ID":"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5","Type":"ContainerStarted","Data":"5bff59b2fe74c2009d701cdcf93aad7d1c4c134e94ea480d6b7f523355f6b1dc"}
Nov 28 21:24:04 crc kubenswrapper[4957]: I1128 21:24:04.618198 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" event={"ID":"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5","Type":"ContainerStarted","Data":"e294278459a9394dda55c56948f7804dc83dd3275eae14e1cea897aea95f8959"}
Nov 28 21:24:04 crc kubenswrapper[4957]: I1128 21:24:04.638185 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" podStartSLOduration=2.13917746 podStartE2EDuration="2.638148478s" podCreationTimestamp="2025-11-28 21:24:02 +0000 UTC" firstStartedPulling="2025-11-28 21:24:03.571724387 +0000 UTC m=+2083.040372296" lastFinishedPulling="2025-11-28 21:24:04.070695415 +0000 UTC m=+2083.539343314" observedRunningTime="2025-11-28 21:24:04.636711703 +0000 UTC m=+2084.105359612" watchObservedRunningTime="2025-11-28 21:24:04.638148478 +0000 UTC m=+2084.106796387"
Nov 28 21:24:28 crc kubenswrapper[4957]: I1128 21:24:28.475027 4957 scope.go:117] "RemoveContainer" containerID="74131dc3e6f3eb96b70d0be5c922c96db1d5a7183fc7fbac2810d8ab6354cb3e"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.061883 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.064813 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.077568 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.188047 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.188124 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbddr\" (UniqueName: \"kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.188254 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.291536 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.291634 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbddr\" (UniqueName: \"kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.291729 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.292226 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.292303 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.322037 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbddr\" (UniqueName: \"kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr\") pod \"community-operators-gdkwm\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") " pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:56 crc kubenswrapper[4957]: I1128 21:24:56.390391 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:24:57 crc kubenswrapper[4957]: I1128 21:24:57.013221 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:24:57 crc kubenswrapper[4957]: I1128 21:24:57.152489 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerStarted","Data":"52ff8da9760ec753935afd897c2c731c03e01c3caad7e196bb9043e7ca2d10fb"}
Nov 28 21:24:58 crc kubenswrapper[4957]: I1128 21:24:58.190344 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d850699-215e-446b-a47c-33d9eadbc704" containerID="3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771" exitCode=0
Nov 28 21:24:58 crc kubenswrapper[4957]: I1128 21:24:58.190423 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerDied","Data":"3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771"}
Nov 28 21:24:59 crc kubenswrapper[4957]: I1128 21:24:59.202159 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerStarted","Data":"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"}
Nov 28 21:25:01 crc kubenswrapper[4957]: I1128 21:25:01.221628 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d850699-215e-446b-a47c-33d9eadbc704" containerID="29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1" exitCode=0
Nov 28 21:25:01 crc kubenswrapper[4957]: I1128 21:25:01.221739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerDied","Data":"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"}
Nov 28 21:25:02 crc kubenswrapper[4957]: I1128 21:25:02.233506 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerStarted","Data":"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"}
Nov 28 21:25:02 crc kubenswrapper[4957]: I1128 21:25:02.235673 4957 generic.go:334] "Generic (PLEG): container finished" podID="853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" containerID="e294278459a9394dda55c56948f7804dc83dd3275eae14e1cea897aea95f8959" exitCode=0
Nov 28 21:25:02 crc kubenswrapper[4957]: I1128 21:25:02.235702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" event={"ID":"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5","Type":"ContainerDied","Data":"e294278459a9394dda55c56948f7804dc83dd3275eae14e1cea897aea95f8959"}
Nov 28 21:25:02 crc kubenswrapper[4957]: I1128 21:25:02.259403 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdkwm" podStartSLOduration=2.713848216 podStartE2EDuration="6.259384032s" podCreationTimestamp="2025-11-28 21:24:56 +0000 UTC" firstStartedPulling="2025-11-28 21:24:58.193776845 +0000 UTC m=+2137.662424754" lastFinishedPulling="2025-11-28 21:25:01.739312651 +0000 UTC m=+2141.207960570" observedRunningTime="2025-11-28 21:25:02.249314553 +0000 UTC m=+2141.717962462" watchObservedRunningTime="2025-11-28 21:25:02.259384032 +0000 UTC m=+2141.728031941"
Nov 28 21:25:03 crc kubenswrapper[4957]: I1128 21:25:03.853791 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.026395 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory\") pod \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") "
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.026554 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key\") pod \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") "
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.026614 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfps\" (UniqueName: \"kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps\") pod \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\" (UID: \"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5\") "
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.049208 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps" (OuterVolumeSpecName: "kube-api-access-vrfps") pod "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" (UID: "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5"). InnerVolumeSpecName "kube-api-access-vrfps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.079744 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" (UID: "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.098122 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory" (OuterVolumeSpecName: "inventory") pod "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" (UID: "853f5e84-3f80-4dd1-99cb-4fb5006f2bf5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.130553 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.130593 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.130606 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfps\" (UniqueName: \"kubernetes.io/projected/853f5e84-3f80-4dd1-99cb-4fb5006f2bf5-kube-api-access-vrfps\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.257983 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml" event={"ID":"853f5e84-3f80-4dd1-99cb-4fb5006f2bf5","Type":"ContainerDied","Data":"5bff59b2fe74c2009d701cdcf93aad7d1c4c134e94ea480d6b7f523355f6b1dc"}
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.258024 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bff59b2fe74c2009d701cdcf93aad7d1c4c134e94ea480d6b7f523355f6b1dc"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.258357 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gbxml"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.350466 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-s4mhd"]
Nov 28 21:25:04 crc kubenswrapper[4957]: E1128 21:25:04.351253 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.351344 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.351659 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="853f5e84-3f80-4dd1-99cb-4fb5006f2bf5" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.352571 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.357733 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.357759 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.358205 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.358451 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.363884 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-s4mhd"]
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.439949 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.440131 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.440211 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6mt\" (UniqueName: \"kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.542260 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6mt\" (UniqueName: \"kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.542430 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.542531 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.548917 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.548978 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.571013 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6mt\" (UniqueName: \"kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt\") pod \"ssh-known-hosts-edpm-deployment-s4mhd\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:04 crc kubenswrapper[4957]: I1128 21:25:04.670166 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:05 crc kubenswrapper[4957]: I1128 21:25:05.333202 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-s4mhd"]
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.278410 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd" event={"ID":"e92f8c7f-4fd3-4ece-963f-3e904a5057bf","Type":"ContainerStarted","Data":"1a0d30703f16f99a8b1387f542a6617e184dd94f2f83039efec1d859fafd0d92"}
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.278458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd" event={"ID":"e92f8c7f-4fd3-4ece-963f-3e904a5057bf","Type":"ContainerStarted","Data":"44b57a0fcf5e210dcc1bda5d93ce8b8094a54b0c3165ab05d36e1435cdc16445"}
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.301353 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd" podStartSLOduration=1.807133935 podStartE2EDuration="2.301336948s" podCreationTimestamp="2025-11-28 21:25:04 +0000 UTC" firstStartedPulling="2025-11-28 21:25:05.345135562 +0000 UTC m=+2144.813783461" lastFinishedPulling="2025-11-28 21:25:05.839338565 +0000 UTC m=+2145.307986474" observedRunningTime="2025-11-28 21:25:06.289500758 +0000 UTC m=+2145.758148667" watchObservedRunningTime="2025-11-28 21:25:06.301336948 +0000 UTC m=+2145.769984857"
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.392091 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.392153 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:06 crc kubenswrapper[4957]: I1128 21:25:06.442740 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:07 crc kubenswrapper[4957]: I1128 21:25:07.350202 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:07 crc kubenswrapper[4957]: I1128 21:25:07.412973 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:25:08 crc kubenswrapper[4957]: I1128 21:25:08.992589 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:25:08 crc kubenswrapper[4957]: I1128 21:25:08.992928 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.309254 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gdkwm" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="registry-server" containerID="cri-o://8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6" gracePeriod=2
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.891581 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.973516 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities\") pod \"3d850699-215e-446b-a47c-33d9eadbc704\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") "
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.973668 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbddr\" (UniqueName: \"kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr\") pod \"3d850699-215e-446b-a47c-33d9eadbc704\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") "
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.973855 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content\") pod \"3d850699-215e-446b-a47c-33d9eadbc704\" (UID: \"3d850699-215e-446b-a47c-33d9eadbc704\") "
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.974478 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities" (OuterVolumeSpecName: "utilities") pod "3d850699-215e-446b-a47c-33d9eadbc704" (UID: "3d850699-215e-446b-a47c-33d9eadbc704"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:25:09 crc kubenswrapper[4957]: I1128 21:25:09.979494 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr" (OuterVolumeSpecName: "kube-api-access-jbddr") pod "3d850699-215e-446b-a47c-33d9eadbc704" (UID: "3d850699-215e-446b-a47c-33d9eadbc704"). InnerVolumeSpecName "kube-api-access-jbddr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.026769 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d850699-215e-446b-a47c-33d9eadbc704" (UID: "3d850699-215e-446b-a47c-33d9eadbc704"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.077011 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.077042 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d850699-215e-446b-a47c-33d9eadbc704-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.077051 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbddr\" (UniqueName: \"kubernetes.io/projected/3d850699-215e-446b-a47c-33d9eadbc704-kube-api-access-jbddr\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.322093 4957 generic.go:334] "Generic (PLEG): container finished" podID="3d850699-215e-446b-a47c-33d9eadbc704" containerID="8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6" exitCode=0
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.322172 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerDied","Data":"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"}
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.322202 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdkwm"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.323010 4957 scope.go:117] "RemoveContainer" containerID="8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.322933 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdkwm" event={"ID":"3d850699-215e-446b-a47c-33d9eadbc704","Type":"ContainerDied","Data":"52ff8da9760ec753935afd897c2c731c03e01c3caad7e196bb9043e7ca2d10fb"}
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.349397 4957 scope.go:117] "RemoveContainer" containerID="29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.362583 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.372869 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gdkwm"]
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.396276 4957 scope.go:117] "RemoveContainer" containerID="3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.443313 4957 scope.go:117] "RemoveContainer" containerID="8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"
Nov 28 21:25:10 crc kubenswrapper[4957]: E1128 21:25:10.443808 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6\": container with ID starting with 8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6 not found: ID does not exist" containerID="8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.443850 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6"} err="failed to get container status \"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6\": rpc error: code = NotFound desc = could not find container \"8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6\": container with ID starting with 8c9b4121ec5fe2c76b30fbfad74335b2980c74da4f54ddc9097e146dae6b99d6 not found: ID does not exist"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.443881 4957 scope.go:117] "RemoveContainer" containerID="29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"
Nov 28 21:25:10 crc kubenswrapper[4957]: E1128 21:25:10.444375 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1\": container with ID starting with 29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1 not found: ID does not exist" containerID="29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.444406 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1"} err="failed to get container status \"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1\": rpc error: code = NotFound desc = could not find container \"29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1\": container with ID starting with 29c5913f13830c7e208ff2b99936191d7a73dda74807c76eca4784bc7de7d6e1 not found: ID does not exist"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.444431 4957 scope.go:117] "RemoveContainer" containerID="3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771"
Nov 28 21:25:10 crc kubenswrapper[4957]: E1128 21:25:10.444737 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771\": container with ID starting with 3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771 not found: ID does not exist" containerID="3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.444766 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771"} err="failed to get container status \"3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771\": rpc error: code = NotFound desc = could not find container \"3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771\": container with ID starting with 3460392c2a6a3763dc9f564e3b7bad78f4919d631ead7841efd91b6d5f5bb771 not found: ID does not exist"
Nov 28 21:25:10 crc kubenswrapper[4957]: I1128 21:25:10.825477 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d850699-215e-446b-a47c-33d9eadbc704" path="/var/lib/kubelet/pods/3d850699-215e-446b-a47c-33d9eadbc704/volumes"
Nov 28 21:25:13 crc kubenswrapper[4957]: I1128 21:25:13.351931 4957 generic.go:334] "Generic (PLEG): container finished" podID="e92f8c7f-4fd3-4ece-963f-3e904a5057bf" containerID="1a0d30703f16f99a8b1387f542a6617e184dd94f2f83039efec1d859fafd0d92" exitCode=0
Nov 28 21:25:13 crc kubenswrapper[4957]: I1128 21:25:13.352108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd" event={"ID":"e92f8c7f-4fd3-4ece-963f-3e904a5057bf","Type":"ContainerDied","Data":"1a0d30703f16f99a8b1387f542a6617e184dd94f2f83039efec1d859fafd0d92"}
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.809632 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.894662 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq6mt\" (UniqueName: \"kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt\") pod \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") "
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.894811 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0\") pod \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") "
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.894839 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam\") pod \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\" (UID: \"e92f8c7f-4fd3-4ece-963f-3e904a5057bf\") "
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.905558 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt" (OuterVolumeSpecName: "kube-api-access-wq6mt") pod "e92f8c7f-4fd3-4ece-963f-3e904a5057bf" (UID: "e92f8c7f-4fd3-4ece-963f-3e904a5057bf"). InnerVolumeSpecName "kube-api-access-wq6mt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.934513 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e92f8c7f-4fd3-4ece-963f-3e904a5057bf" (UID: "e92f8c7f-4fd3-4ece-963f-3e904a5057bf"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.939556 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e92f8c7f-4fd3-4ece-963f-3e904a5057bf" (UID: "e92f8c7f-4fd3-4ece-963f-3e904a5057bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.998624 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq6mt\" (UniqueName: \"kubernetes.io/projected/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-kube-api-access-wq6mt\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.998773 4957 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:14 crc kubenswrapper[4957]: I1128 21:25:14.998845 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e92f8c7f-4fd3-4ece-963f-3e904a5057bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.377672 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd" event={"ID":"e92f8c7f-4fd3-4ece-963f-3e904a5057bf","Type":"ContainerDied","Data":"44b57a0fcf5e210dcc1bda5d93ce8b8094a54b0c3165ab05d36e1435cdc16445"}
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.377718 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b57a0fcf5e210dcc1bda5d93ce8b8094a54b0c3165ab05d36e1435cdc16445"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.377779 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-s4mhd"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.471728 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"]
Nov 28 21:25:15 crc kubenswrapper[4957]: E1128 21:25:15.472536 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92f8c7f-4fd3-4ece-963f-3e904a5057bf" containerName="ssh-known-hosts-edpm-deployment"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.472566 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92f8c7f-4fd3-4ece-963f-3e904a5057bf" containerName="ssh-known-hosts-edpm-deployment"
Nov 28 21:25:15 crc kubenswrapper[4957]: E1128 21:25:15.472588 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="registry-server"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.472603 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="registry-server"
Nov 28 21:25:15 crc kubenswrapper[4957]: E1128 21:25:15.472643 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="extract-content"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.472656 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="extract-content"
Nov 28 21:25:15 crc kubenswrapper[4957]: E1128 21:25:15.472692 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="extract-utilities"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.472705 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="extract-utilities"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.473120 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d850699-215e-446b-a47c-33d9eadbc704" containerName="registry-server"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.473158 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92f8c7f-4fd3-4ece-963f-3e904a5057bf" containerName="ssh-known-hosts-edpm-deployment"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.474572 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.483939 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.510573 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"]
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.513559 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qxp\" (UniqueName: \"kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.513631 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.513768 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.520643 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.520856 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.520856 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.616444 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qxp\" (UniqueName: \"kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.616762 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.616954 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.622787 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.629373 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.631792 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qxp\" (UniqueName: \"kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lq4r6\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:15 crc kubenswrapper[4957]: I1128 21:25:15.807435 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:16 crc kubenswrapper[4957]: I1128 21:25:16.393866 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"]
Nov 28 21:25:17 crc kubenswrapper[4957]: I1128 21:25:17.404752 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6" event={"ID":"517e3d64-b818-4eea-a010-1237b735c5e2","Type":"ContainerStarted","Data":"a1e3230252d3cee444441c0017c6a57de0f683c4530ad65a0304ab600498c100"}
Nov 28 21:25:17 crc kubenswrapper[4957]: I1128 21:25:17.405324 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6" event={"ID":"517e3d64-b818-4eea-a010-1237b735c5e2","Type":"ContainerStarted","Data":"ca8a4a69dd6cc666eb310e0aab34206e27c168d285bc3efbdc728ed712a3546b"}
Nov 28 21:25:17 crc kubenswrapper[4957]: I1128 21:25:17.447875 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6" podStartSLOduration=1.924067755 podStartE2EDuration="2.447856302s" podCreationTimestamp="2025-11-28 21:25:15 +0000 UTC" firstStartedPulling="2025-11-28 21:25:16.398512185 +0000 UTC m=+2155.867160094" lastFinishedPulling="2025-11-28 21:25:16.922300732 +0000 UTC m=+2156.390948641" observedRunningTime="2025-11-28 21:25:17.447664057 +0000 UTC m=+2156.916311966" watchObservedRunningTime="2025-11-28 21:25:17.447856302 +0000 UTC m=+2156.916504211"
Nov 28 21:25:24 crc kubenswrapper[4957]: I1128 21:25:24.050812 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-j52q4"]
Nov 28 21:25:24 crc kubenswrapper[4957]: I1128 21:25:24.064610 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-j52q4"]
Nov 28 21:25:24 crc kubenswrapper[4957]: I1128 21:25:24.828160 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3d0cd5-b463-4340-9c00-7d226bec612a" path="/var/lib/kubelet/pods/6f3d0cd5-b463-4340-9c00-7d226bec612a/volumes"
Nov 28 21:25:26 crc kubenswrapper[4957]: I1128 21:25:26.499022 4957 generic.go:334] "Generic (PLEG): container finished" podID="517e3d64-b818-4eea-a010-1237b735c5e2" containerID="a1e3230252d3cee444441c0017c6a57de0f683c4530ad65a0304ab600498c100" exitCode=0
Nov 28 21:25:26 crc kubenswrapper[4957]: I1128 21:25:26.499116 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6" event={"ID":"517e3d64-b818-4eea-a010-1237b735c5e2","Type":"ContainerDied","Data":"a1e3230252d3cee444441c0017c6a57de0f683c4530ad65a0304ab600498c100"}
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.019597 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.210102 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory\") pod \"517e3d64-b818-4eea-a010-1237b735c5e2\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") "
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.210354 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key\") pod \"517e3d64-b818-4eea-a010-1237b735c5e2\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") "
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.210450 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99qxp\" (UniqueName: \"kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp\") pod \"517e3d64-b818-4eea-a010-1237b735c5e2\" (UID: \"517e3d64-b818-4eea-a010-1237b735c5e2\") "
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.218172 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp" (OuterVolumeSpecName: "kube-api-access-99qxp") pod "517e3d64-b818-4eea-a010-1237b735c5e2" (UID: "517e3d64-b818-4eea-a010-1237b735c5e2"). InnerVolumeSpecName "kube-api-access-99qxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.244296 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory" (OuterVolumeSpecName: "inventory") pod "517e3d64-b818-4eea-a010-1237b735c5e2" (UID: "517e3d64-b818-4eea-a010-1237b735c5e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.245833 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "517e3d64-b818-4eea-a010-1237b735c5e2" (UID: "517e3d64-b818-4eea-a010-1237b735c5e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.312753 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.312983 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/517e3d64-b818-4eea-a010-1237b735c5e2-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.313055 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99qxp\" (UniqueName: \"kubernetes.io/projected/517e3d64-b818-4eea-a010-1237b735c5e2-kube-api-access-99qxp\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.555495 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6" event={"ID":"517e3d64-b818-4eea-a010-1237b735c5e2","Type":"ContainerDied","Data":"ca8a4a69dd6cc666eb310e0aab34206e27c168d285bc3efbdc728ed712a3546b"}
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.555534 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8a4a69dd6cc666eb310e0aab34206e27c168d285bc3efbdc728ed712a3546b"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.555604 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lq4r6"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.593876 4957 scope.go:117] "RemoveContainer" containerID="35e7822cf2ce6d67f867998ddf88931f7fdbde0393cab7975785b4f88a986e90"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.639687 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"]
Nov 28 21:25:28 crc kubenswrapper[4957]: E1128 21:25:28.640136 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517e3d64-b818-4eea-a010-1237b735c5e2" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.640151 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="517e3d64-b818-4eea-a010-1237b735c5e2" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.640390 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="517e3d64-b818-4eea-a010-1237b735c5e2" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.641197 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.647427 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.647715 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.647837 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.649332 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.649672 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.649811 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjmx\" (UniqueName: \"kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.649837 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.656323 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"]
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.752018 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bjmx\" (UniqueName: \"kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.752069 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.752162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.756930 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.757640 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:28 crc kubenswrapper[4957]: I1128 21:25:28.767662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bjmx\" (UniqueName: \"kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:29 crc kubenswrapper[4957]: I1128 21:25:29.007871 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:29 crc kubenswrapper[4957]: I1128 21:25:29.622926 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"]
Nov 28 21:25:29 crc kubenswrapper[4957]: W1128 21:25:29.627418 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe3139dd_9ebc_4678_abba_2217f17f76c1.slice/crio-645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04 WatchSource:0}: Error finding container 645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04: Status 404 returned error can't find the container with id 645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04
Nov 28 21:25:30 crc kubenswrapper[4957]: I1128 21:25:30.579809 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9" event={"ID":"be3139dd-9ebc-4678-abba-2217f17f76c1","Type":"ContainerStarted","Data":"f6e2dd44d10bf493649b1e00a434f6b372b8b9de2c6f9c6cb67ac56079a50e8a"}
Nov 28 21:25:30 crc kubenswrapper[4957]: I1128 21:25:30.580163 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9" event={"ID":"be3139dd-9ebc-4678-abba-2217f17f76c1","Type":"ContainerStarted","Data":"645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04"}
Nov 28 21:25:30 crc kubenswrapper[4957]: I1128 21:25:30.603703 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9" podStartSLOduration=2.155053086 podStartE2EDuration="2.603678841s" podCreationTimestamp="2025-11-28 21:25:28 +0000 UTC" firstStartedPulling="2025-11-28 21:25:29.641571591 +0000 UTC m=+2169.110219500" lastFinishedPulling="2025-11-28 21:25:30.090197346 +0000 UTC m=+2169.558845255" observedRunningTime="2025-11-28 21:25:30.596663049 +0000 UTC m=+2170.065310958" watchObservedRunningTime="2025-11-28 21:25:30.603678841 +0000 UTC m=+2170.072326750"
Nov 28 21:25:38 crc kubenswrapper[4957]: I1128 21:25:38.992914 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:25:38 crc kubenswrapper[4957]: I1128 21:25:38.993223 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:25:41 crc kubenswrapper[4957]: I1128 21:25:41.700705 4957 generic.go:334] "Generic (PLEG): container finished" podID="be3139dd-9ebc-4678-abba-2217f17f76c1" containerID="f6e2dd44d10bf493649b1e00a434f6b372b8b9de2c6f9c6cb67ac56079a50e8a" exitCode=0
Nov 28 21:25:41 crc kubenswrapper[4957]: I1128 21:25:41.700792 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9" event={"ID":"be3139dd-9ebc-4678-abba-2217f17f76c1","Type":"ContainerDied","Data":"f6e2dd44d10bf493649b1e00a434f6b372b8b9de2c6f9c6cb67ac56079a50e8a"}
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.269765 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.449099 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key\") pod \"be3139dd-9ebc-4678-abba-2217f17f76c1\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") "
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.449416 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bjmx\" (UniqueName: \"kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx\") pod \"be3139dd-9ebc-4678-abba-2217f17f76c1\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") "
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.449471 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory\") pod \"be3139dd-9ebc-4678-abba-2217f17f76c1\" (UID: \"be3139dd-9ebc-4678-abba-2217f17f76c1\") "
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.457688 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx" (OuterVolumeSpecName: "kube-api-access-8bjmx") pod "be3139dd-9ebc-4678-abba-2217f17f76c1" (UID: "be3139dd-9ebc-4678-abba-2217f17f76c1"). InnerVolumeSpecName "kube-api-access-8bjmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.489357 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory" (OuterVolumeSpecName: "inventory") pod "be3139dd-9ebc-4678-abba-2217f17f76c1" (UID: "be3139dd-9ebc-4678-abba-2217f17f76c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.490171 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be3139dd-9ebc-4678-abba-2217f17f76c1" (UID: "be3139dd-9ebc-4678-abba-2217f17f76c1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.551972 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bjmx\" (UniqueName: \"kubernetes.io/projected/be3139dd-9ebc-4678-abba-2217f17f76c1-kube-api-access-8bjmx\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.552004 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.552013 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be3139dd-9ebc-4678-abba-2217f17f76c1-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.721733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9" event={"ID":"be3139dd-9ebc-4678-abba-2217f17f76c1","Type":"ContainerDied","Data":"645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04"}
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.721779 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645868e312f0ee35481dc79bc2be0fdb42399e0940f05568462b7e5de284ea04"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.721841 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.832896 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"]
Nov 28 21:25:43 crc kubenswrapper[4957]: E1128 21:25:43.833442 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3139dd-9ebc-4678-abba-2217f17f76c1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.833459 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3139dd-9ebc-4678-abba-2217f17f76c1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.833747 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3139dd-9ebc-4678-abba-2217f17f76c1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.834560 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.838390 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.838811 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.838948 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839066 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839279 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839307 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839454 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839530 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.839617 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.865600 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"]
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.867735 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.867779 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.867843 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"
Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868005 4957 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868104 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868227 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868312 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868468 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868589 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868692 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868800 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868882 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.868969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.869040 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.869139 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkc58\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.869337 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971798 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971888 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971983 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.971999 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972019 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972038 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972088 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972105 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972128 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972153 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972195 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972225 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:43 crc kubenswrapper[4957]: I1128 21:25:43.972256 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkc58\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.001667 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.010581 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.010660 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.011079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.011172 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.019786 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.019893 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.020007 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: 
\"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.022924 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.023551 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.025725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.027888 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.038906 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.043988 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkc58\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.044638 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.044905 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.156173 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.670561 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd"] Nov 28 21:25:44 crc kubenswrapper[4957]: I1128 21:25:44.732642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" event={"ID":"644a4348-cc60-4801-a899-27ba6238dcd1","Type":"ContainerStarted","Data":"e3e371cb85c1511a720c91df1cdeef2b4c5f60b16eec786d7b604f4bf448dc44"} Nov 28 21:25:45 crc kubenswrapper[4957]: I1128 21:25:45.745714 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" event={"ID":"644a4348-cc60-4801-a899-27ba6238dcd1","Type":"ContainerStarted","Data":"906b9e0feabf6c80e5888aa3da20964c30f847b9582252eaf5a11671590cfef2"} Nov 28 21:25:45 crc kubenswrapper[4957]: I1128 21:25:45.776396 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" podStartSLOduration=2.296583121 podStartE2EDuration="2.77637666s" podCreationTimestamp="2025-11-28 21:25:43 +0000 UTC" firstStartedPulling="2025-11-28 21:25:44.678434932 +0000 UTC m=+2184.147082841" lastFinishedPulling="2025-11-28 21:25:45.158228481 +0000 UTC m=+2184.626876380" observedRunningTime="2025-11-28 21:25:45.764875978 +0000 UTC m=+2185.233523907" watchObservedRunningTime="2025-11-28 21:25:45.77637666 +0000 UTC m=+2185.245024569" Nov 28 21:25:54 crc kubenswrapper[4957]: I1128 21:25:54.044648 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jm9wx"] Nov 28 21:25:54 crc kubenswrapper[4957]: I1128 21:25:54.057889 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jm9wx"] Nov 28 21:25:54 crc kubenswrapper[4957]: I1128 21:25:54.830065 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5a6783-443c-4aa3-8985-2476a17d6f48" path="/var/lib/kubelet/pods/8e5a6783-443c-4aa3-8985-2476a17d6f48/volumes" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.754309 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.757010 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.775322 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.810304 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.810509 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgjv\" (UniqueName: \"kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.810583 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.913743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.914270 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgjv\" (UniqueName: \"kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.914339 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.914443 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.914909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:25:59 crc kubenswrapper[4957]: I1128 21:25:59.943258 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7bgjv\" (UniqueName: \"kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv\") pod \"redhat-marketplace-skzbs\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:00 crc kubenswrapper[4957]: I1128 21:26:00.090559 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:00 crc kubenswrapper[4957]: I1128 21:26:00.605761 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:26:00 crc kubenswrapper[4957]: W1128 21:26:00.613376 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc91f74bb_bb28_48e4_bf4b_81165cc1193d.slice/crio-28f7542480990ed2715ef476a066cbd033a696dfacd72592f74a36f302e7e327 WatchSource:0}: Error finding container 28f7542480990ed2715ef476a066cbd033a696dfacd72592f74a36f302e7e327: Status 404 returned error can't find the container with id 28f7542480990ed2715ef476a066cbd033a696dfacd72592f74a36f302e7e327 Nov 28 21:26:00 crc kubenswrapper[4957]: I1128 21:26:00.914064 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerStarted","Data":"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875"} Nov 28 21:26:00 crc kubenswrapper[4957]: I1128 21:26:00.914107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerStarted","Data":"28f7542480990ed2715ef476a066cbd033a696dfacd72592f74a36f302e7e327"} Nov 28 21:26:01 crc kubenswrapper[4957]: I1128 21:26:01.926363 4957 generic.go:334] "Generic (PLEG): container finished" podID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerID="2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875" exitCode=0 Nov 28 21:26:01 crc kubenswrapper[4957]: I1128 21:26:01.926677 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerDied","Data":"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875"} Nov 28 21:26:01 crc kubenswrapper[4957]: I1128 21:26:01.926702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerStarted","Data":"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd"} Nov 28 21:26:02 crc kubenswrapper[4957]: I1128 21:26:02.954477 4957 generic.go:334] "Generic (PLEG): container finished" podID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerID="c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd" exitCode=0 Nov 28 21:26:02 crc kubenswrapper[4957]: I1128 21:26:02.954834 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerDied","Data":"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd"} Nov 28 21:26:03 crc kubenswrapper[4957]: I1128 21:26:03.971305 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" 
event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerStarted","Data":"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687"} Nov 28 21:26:08 crc kubenswrapper[4957]: I1128 21:26:08.992641 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:26:08 crc kubenswrapper[4957]: I1128 21:26:08.995153 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:26:08 crc kubenswrapper[4957]: I1128 21:26:08.995488 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:26:08 crc kubenswrapper[4957]: I1128 21:26:08.996818 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:26:08 crc kubenswrapper[4957]: I1128 21:26:08.997150 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7" gracePeriod=600 Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.037072 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7" exitCode=0 Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.037160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7"} Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.037719 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"} Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.037743 4957 scope.go:117] "RemoveContainer" containerID="4cd7324e900f977fbbcc8462025b160e869cf5db5539d2a749426c0a466a83c5" Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.066938 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skzbs" podStartSLOduration=8.53375382 podStartE2EDuration="11.066914554s" podCreationTimestamp="2025-11-28 21:25:59 +0000 UTC" firstStartedPulling="2025-11-28 21:26:00.918410858 +0000 UTC m=+2200.387058767" lastFinishedPulling="2025-11-28 21:26:03.451571582 +0000 UTC m=+2202.920219501" observedRunningTime="2025-11-28 
21:26:03.995723297 +0000 UTC m=+2203.464371246" watchObservedRunningTime="2025-11-28 21:26:10.066914554 +0000 UTC m=+2209.535562463" Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.091472 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.091513 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:10 crc kubenswrapper[4957]: I1128 21:26:10.137745 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:11 crc kubenswrapper[4957]: I1128 21:26:11.129999 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:11 crc kubenswrapper[4957]: I1128 21:26:11.212398 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.078225 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skzbs" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="registry-server" containerID="cri-o://8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687" gracePeriod=2 Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.614360 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.788721 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities\") pod \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.788951 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content\") pod \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.788993 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgjv\" (UniqueName: \"kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv\") pod \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\" (UID: \"c91f74bb-bb28-48e4-bf4b-81165cc1193d\") " Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.789874 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities" (OuterVolumeSpecName: "utilities") pod "c91f74bb-bb28-48e4-bf4b-81165cc1193d" (UID: "c91f74bb-bb28-48e4-bf4b-81165cc1193d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.803758 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv" (OuterVolumeSpecName: "kube-api-access-7bgjv") pod "c91f74bb-bb28-48e4-bf4b-81165cc1193d" (UID: "c91f74bb-bb28-48e4-bf4b-81165cc1193d"). InnerVolumeSpecName "kube-api-access-7bgjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.806828 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c91f74bb-bb28-48e4-bf4b-81165cc1193d" (UID: "c91f74bb-bb28-48e4-bf4b-81165cc1193d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.892235 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.892276 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgjv\" (UniqueName: \"kubernetes.io/projected/c91f74bb-bb28-48e4-bf4b-81165cc1193d-kube-api-access-7bgjv\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:13 crc kubenswrapper[4957]: I1128 21:26:13.892291 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c91f74bb-bb28-48e4-bf4b-81165cc1193d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.089440 4957 generic.go:334] "Generic (PLEG): container finished" podID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerID="8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687" exitCode=0 Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.089522 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skzbs" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.089532 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerDied","Data":"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687"} Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.090652 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skzbs" event={"ID":"c91f74bb-bb28-48e4-bf4b-81165cc1193d","Type":"ContainerDied","Data":"28f7542480990ed2715ef476a066cbd033a696dfacd72592f74a36f302e7e327"} Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.090681 4957 scope.go:117] "RemoveContainer" containerID="8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.143654 4957 scope.go:117] "RemoveContainer" containerID="c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.144894 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.160467 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skzbs"] Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.172754 4957 scope.go:117] "RemoveContainer" containerID="2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.229738 4957 scope.go:117] "RemoveContainer" containerID="8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687" Nov 28 21:26:14 crc kubenswrapper[4957]: E1128 21:26:14.230385 4957 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687\": container with ID starting with 8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687 not found: ID does not exist" containerID="8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.230503 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687"} err="failed to get container status \"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687\": rpc error: code = NotFound desc = could not find container \"8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687\": container with ID starting with 8058db04c825d8d809c00ad1284bb77d28b0b377ddc75746e7b02245d8446687 not found: ID does not exist" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.230627 4957 scope.go:117] "RemoveContainer" containerID="c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd" Nov 28 21:26:14 crc kubenswrapper[4957]: E1128 21:26:14.231033 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd\": container with ID starting with c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd not found: ID does not exist" containerID="c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.231122 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd"} err="failed to get container status \"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd\": rpc error: code = NotFound desc = could not find container \"c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd\": container with ID starting with c15f44f15595665ac635b05d150b1a7d4875ce3855dd8ceb08c27e585d57fbcd not found: ID does not exist" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.231204 4957 scope.go:117] "RemoveContainer" containerID="2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875" Nov 28 21:26:14 crc kubenswrapper[4957]: E1128 21:26:14.231489 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875\": container with ID starting with 2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875 not found: ID does not exist" containerID="2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.231725 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875"} err="failed to get container status \"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875\": rpc error: code = NotFound desc = could not find container \"2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875\": container with ID starting with 2308cad8d86bc7f363dbbe75648e79cc4228fc775979557efd7ec10ae5f37875 not found: ID does not exist" Nov 28 21:26:14 crc kubenswrapper[4957]: I1128 21:26:14.835310 4957 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" path="/var/lib/kubelet/pods/c91f74bb-bb28-48e4-bf4b-81165cc1193d/volumes" Nov 28 21:26:28 crc kubenswrapper[4957]: I1128 21:26:28.710125 4957 scope.go:117] "RemoveContainer" containerID="ea2ccdf1284a7aa7d8981ca6f9b75fd4358d74e94dfc58ce135d4c6d08232b24" Nov 28 21:26:38 crc kubenswrapper[4957]: I1128 21:26:38.538580 4957 generic.go:334] "Generic (PLEG): container finished" podID="644a4348-cc60-4801-a899-27ba6238dcd1" containerID="906b9e0feabf6c80e5888aa3da20964c30f847b9582252eaf5a11671590cfef2" exitCode=0 Nov 28 21:26:38 crc kubenswrapper[4957]: I1128 21:26:38.538642 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" event={"ID":"644a4348-cc60-4801-a899-27ba6238dcd1","Type":"ContainerDied","Data":"906b9e0feabf6c80e5888aa3da20964c30f847b9582252eaf5a11671590cfef2"} Nov 28 21:26:39 crc kubenswrapper[4957]: I1128 21:26:39.980968 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021035 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021079 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021133 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021202 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021245 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " 
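
The container for install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd exits 0 at 21:26:38, and the kubelet then tears the pod down volume by volume: each secret and projected mount logs "UnmountVolume started" followed by "UnmountVolume.TearDown succeeded" before the pod's volumes directory is reclaimed, mirroring the reboot-os and redhat-marketplace teardowns earlier in this capture. A minimal sketch for pulling these lifecycle transitions out of such a journal, assuming only the kubenswrapper text layout visible in these lines (the program and the regexp are illustrative, not part of the kubelet):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the "SyncLoop (PLEG)" lines as they appear in this capture.
    // The field layout (pod="ns/name" event={"ID":...,"Type":...,"Data":...})
    // is taken from the log text itself; this is a parsing assumption about
    // klog output, not a kubelet API.
    var pleg = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        // Journal lines in captures like this one can run to several KB.
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
                // m[1] = namespace/name, m[2] = pod UID,
                // m[3] = ContainerStarted/ContainerDied, m[4] = container or sandbox ID
                fmt.Printf("%s\t%s\t%s\n", m[1], m[3], m[4])
            }
        }
    }

Fed the kubelet journal on stdin (e.g. piped from journalctl -u kubelet), this reduces the stream to transitions such as openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd ContainerDied 906b9e0f…, which is exactly the event that triggers the unmount sequence continuing below.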
Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021385 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021404 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021482 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021522 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021558 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021591 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021613 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021635 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021664 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkc58\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.021719 4957 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key\") pod \"644a4348-cc60-4801-a899-27ba6238dcd1\" (UID: \"644a4348-cc60-4801-a899-27ba6238dcd1\") " Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.028908 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.029371 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.030174 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.031018 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035354 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035548 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035558 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035592 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58" (OuterVolumeSpecName: "kube-api-access-zkc58") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "kube-api-access-zkc58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035594 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035846 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.035899 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.036071 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.037363 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.040459 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.065252 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.068802 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory" (OuterVolumeSpecName: "inventory") pod "644a4348-cc60-4801-a899-27ba6238dcd1" (UID: "644a4348-cc60-4801-a899-27ba6238dcd1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125189 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkc58\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-kube-api-access-zkc58\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125246 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125261 4957 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125277 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125290 4957 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125302 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125314 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc 
kubenswrapper[4957]: I1128 21:26:40.125330 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125342 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125353 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125364 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125375 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125388 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125400 4957 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125415 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/644a4348-cc60-4801-a899-27ba6238dcd1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.125429 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644a4348-cc60-4801-a899-27ba6238dcd1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.566715 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" event={"ID":"644a4348-cc60-4801-a899-27ba6238dcd1","Type":"ContainerDied","Data":"e3e371cb85c1511a720c91df1cdeef2b4c5f60b16eec786d7b604f4bf448dc44"} Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.566778 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e371cb85c1511a720c91df1cdeef2b4c5f60b16eec786d7b604f4bf448dc44" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.566800 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.684851 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg"] Nov 28 21:26:40 crc kubenswrapper[4957]: E1128 21:26:40.685368 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="extract-utilities" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685381 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="extract-utilities" Nov 28 21:26:40 crc kubenswrapper[4957]: E1128 21:26:40.685401 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a4348-cc60-4801-a899-27ba6238dcd1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685410 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a4348-cc60-4801-a899-27ba6238dcd1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 21:26:40 crc kubenswrapper[4957]: E1128 21:26:40.685432 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="extract-content" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685439 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="extract-content" Nov 28 21:26:40 crc kubenswrapper[4957]: E1128 21:26:40.685454 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="registry-server" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685460 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="registry-server" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685661 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="644a4348-cc60-4801-a899-27ba6238dcd1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.685691 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91f74bb-bb28-48e4-bf4b-81165cc1193d" containerName="registry-server" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.686495 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.689273 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.691201 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.691536 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.691789 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.692003 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.733005 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg"] Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.736572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.736650 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7pp\" (UniqueName: \"kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.736681 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.736712 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.736856 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.838527 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.838691 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.838743 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7pp\" (UniqueName: \"kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.838772 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.838801 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.840665 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.843894 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.845725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.852595 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:40 crc kubenswrapper[4957]: I1128 21:26:40.868601 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7pp\" (UniqueName: \"kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5k6gg\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:41 crc kubenswrapper[4957]: I1128 21:26:41.038775 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:26:41 crc kubenswrapper[4957]: I1128 21:26:41.572818 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg"] Nov 28 21:26:41 crc kubenswrapper[4957]: I1128 21:26:41.573275 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:26:41 crc kubenswrapper[4957]: I1128 21:26:41.584922 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" event={"ID":"9938b0a7-21ab-4bb0-b689-6004bce90534","Type":"ContainerStarted","Data":"50b868c2de006c90720eabafd080843fc78e2397c7cef8a8bcedf37d5435b6b0"} Nov 28 21:26:42 crc kubenswrapper[4957]: I1128 21:26:42.620005 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" event={"ID":"9938b0a7-21ab-4bb0-b689-6004bce90534","Type":"ContainerStarted","Data":"8e8de6898a217a31975bd7ed93c7c86fcfc59c2fbd063469a3734bb5372f9f82"} Nov 28 21:26:42 crc kubenswrapper[4957]: I1128 21:26:42.649545 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" podStartSLOduration=2.030163824 podStartE2EDuration="2.649522394s" podCreationTimestamp="2025-11-28 21:26:40 +0000 UTC" firstStartedPulling="2025-11-28 21:26:41.572965129 +0000 UTC m=+2241.041613048" lastFinishedPulling="2025-11-28 21:26:42.192323709 +0000 UTC m=+2241.660971618" observedRunningTime="2025-11-28 21:26:42.638313269 +0000 UTC m=+2242.106961198" watchObservedRunningTime="2025-11-28 21:26:42.649522394 +0000 UTC m=+2242.118170323" Nov 28 21:27:51 crc kubenswrapper[4957]: I1128 21:27:51.339730 4957 generic.go:334] "Generic (PLEG): container finished" podID="9938b0a7-21ab-4bb0-b689-6004bce90534" containerID="8e8de6898a217a31975bd7ed93c7c86fcfc59c2fbd063469a3734bb5372f9f82" exitCode=0 Nov 28 21:27:51 crc kubenswrapper[4957]: I1128 21:27:51.339853 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" event={"ID":"9938b0a7-21ab-4bb0-b689-6004bce90534","Type":"ContainerDied","Data":"8e8de6898a217a31975bd7ed93c7c86fcfc59c2fbd063469a3734bb5372f9f82"} Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.847793 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.951883 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle\") pod \"9938b0a7-21ab-4bb0-b689-6004bce90534\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.951959 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory\") pod \"9938b0a7-21ab-4bb0-b689-6004bce90534\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.952003 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g7pp\" (UniqueName: \"kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp\") pod \"9938b0a7-21ab-4bb0-b689-6004bce90534\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.952143 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key\") pod \"9938b0a7-21ab-4bb0-b689-6004bce90534\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.952183 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0\") pod \"9938b0a7-21ab-4bb0-b689-6004bce90534\" (UID: \"9938b0a7-21ab-4bb0-b689-6004bce90534\") " Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.958356 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9938b0a7-21ab-4bb0-b689-6004bce90534" (UID: "9938b0a7-21ab-4bb0-b689-6004bce90534"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.961580 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp" (OuterVolumeSpecName: "kube-api-access-2g7pp") pod "9938b0a7-21ab-4bb0-b689-6004bce90534" (UID: "9938b0a7-21ab-4bb0-b689-6004bce90534"). InnerVolumeSpecName "kube-api-access-2g7pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.986722 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9938b0a7-21ab-4bb0-b689-6004bce90534" (UID: "9938b0a7-21ab-4bb0-b689-6004bce90534"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:27:52 crc kubenswrapper[4957]: I1128 21:27:52.998924 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9938b0a7-21ab-4bb0-b689-6004bce90534" (UID: "9938b0a7-21ab-4bb0-b689-6004bce90534"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.000026 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory" (OuterVolumeSpecName: "inventory") pod "9938b0a7-21ab-4bb0-b689-6004bce90534" (UID: "9938b0a7-21ab-4bb0-b689-6004bce90534"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.054748 4957 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.054787 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.054798 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g7pp\" (UniqueName: \"kubernetes.io/projected/9938b0a7-21ab-4bb0-b689-6004bce90534-kube-api-access-2g7pp\") on node \"crc\" DevicePath \"\"" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.054808 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9938b0a7-21ab-4bb0-b689-6004bce90534-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.054817 4957 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9938b0a7-21ab-4bb0-b689-6004bce90534-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.363018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" event={"ID":"9938b0a7-21ab-4bb0-b689-6004bce90534","Type":"ContainerDied","Data":"50b868c2de006c90720eabafd080843fc78e2397c7cef8a8bcedf37d5435b6b0"} Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.363290 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b868c2de006c90720eabafd080843fc78e2397c7cef8a8bcedf37d5435b6b0" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.363083 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5k6gg" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.457865 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j"] Nov 28 21:27:53 crc kubenswrapper[4957]: E1128 21:27:53.458414 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9938b0a7-21ab-4bb0-b689-6004bce90534" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.458432 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9938b0a7-21ab-4bb0-b689-6004bce90534" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.458647 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9938b0a7-21ab-4bb0-b689-6004bce90534" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.459403 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.461508 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.461729 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.461853 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.461966 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.462114 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.462246 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.468334 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j"] Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566005 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566076 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566171 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566283 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566314 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv7t\" (UniqueName: \"kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.566365 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.668994 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.669085 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.669147 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.669228 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: 
\"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.669272 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzv7t\" (UniqueName: \"kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.669332 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.672854 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.673064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.673311 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.673543 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.674878 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.686741 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzv7t\" 
(UniqueName: \"kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:53 crc kubenswrapper[4957]: I1128 21:27:53.785046 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:27:54 crc kubenswrapper[4957]: I1128 21:27:54.342348 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j"] Nov 28 21:27:54 crc kubenswrapper[4957]: I1128 21:27:54.379750 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" event={"ID":"c14be992-6888-4d40-a63f-8ba6cbc0c837","Type":"ContainerStarted","Data":"76bb97854a1e92b83092e49cb4b123981d3d2519394f9bc2a65c2c601d45a2e3"} Nov 28 21:27:55 crc kubenswrapper[4957]: I1128 21:27:55.394430 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" event={"ID":"c14be992-6888-4d40-a63f-8ba6cbc0c837","Type":"ContainerStarted","Data":"869f9700ef0b56924a301a1d9a3b35214d5f446708e5d3bbfd5873555ed90aa2"} Nov 28 21:27:55 crc kubenswrapper[4957]: I1128 21:27:55.416804 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" podStartSLOduration=1.895869453 podStartE2EDuration="2.416783479s" podCreationTimestamp="2025-11-28 21:27:53 +0000 UTC" firstStartedPulling="2025-11-28 21:27:54.353429429 +0000 UTC m=+2313.822077358" lastFinishedPulling="2025-11-28 21:27:54.874343465 +0000 UTC m=+2314.342991384" observedRunningTime="2025-11-28 21:27:55.407172284 +0000 UTC m=+2314.875820203" watchObservedRunningTime="2025-11-28 21:27:55.416783479 +0000 UTC m=+2314.885431388" Nov 28 21:28:38 crc kubenswrapper[4957]: I1128 21:28:38.992765 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:28:38 crc kubenswrapper[4957]: I1128 21:28:38.993292 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:28:49 crc kubenswrapper[4957]: I1128 21:28:49.007743 4957 generic.go:334] "Generic (PLEG): container finished" podID="c14be992-6888-4d40-a63f-8ba6cbc0c837" containerID="869f9700ef0b56924a301a1d9a3b35214d5f446708e5d3bbfd5873555ed90aa2" exitCode=0 Nov 28 21:28:49 crc kubenswrapper[4957]: I1128 21:28:49.007837 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" event={"ID":"c14be992-6888-4d40-a63f-8ba6cbc0c837","Type":"ContainerDied","Data":"869f9700ef0b56924a301a1d9a3b35214d5f446708e5d3bbfd5873555ed90aa2"} Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.526501 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650472 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzv7t\" (UniqueName: \"kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650531 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650569 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650596 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650714 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.650743 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c14be992-6888-4d40-a63f-8ba6cbc0c837\" (UID: \"c14be992-6888-4d40-a63f-8ba6cbc0c837\") " Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.658342 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.659039 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t" (OuterVolumeSpecName: "kube-api-access-nzv7t") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "kube-api-access-nzv7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.690999 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.691247 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory" (OuterVolumeSpecName: "inventory") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.692366 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.693242 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c14be992-6888-4d40-a63f-8ba6cbc0c837" (UID: "c14be992-6888-4d40-a63f-8ba6cbc0c837"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754677 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754719 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754738 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzv7t\" (UniqueName: \"kubernetes.io/projected/c14be992-6888-4d40-a63f-8ba6cbc0c837-kube-api-access-nzv7t\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754752 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754765 4957 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:50 crc kubenswrapper[4957]: I1128 21:28:50.754777 4957 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c14be992-6888-4d40-a63f-8ba6cbc0c837-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.029628 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" event={"ID":"c14be992-6888-4d40-a63f-8ba6cbc0c837","Type":"ContainerDied","Data":"76bb97854a1e92b83092e49cb4b123981d3d2519394f9bc2a65c2c601d45a2e3"} Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.029669 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bb97854a1e92b83092e49cb4b123981d3d2519394f9bc2a65c2c601d45a2e3" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.029677 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.146404 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"] Nov 28 21:28:51 crc kubenswrapper[4957]: E1128 21:28:51.147114 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14be992-6888-4d40-a63f-8ba6cbc0c837" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.147144 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14be992-6888-4d40-a63f-8ba6cbc0c837" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.147571 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14be992-6888-4d40-a63f-8ba6cbc0c837" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.148634 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.151313 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.151525 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.152332 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.152535 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.152607 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.162430 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"]
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.268137 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.268689 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.269230 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.269274 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.269415 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvxz\" (UniqueName: \"kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.371750 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvxz\" (UniqueName: \"kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.371833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.371879 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.371937 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.371964 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.376868 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.377296 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.377309 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.377920 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.387273 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvxz\" (UniqueName: \"kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:51 crc kubenswrapper[4957]: I1128 21:28:51.487262 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:28:52 crc kubenswrapper[4957]: I1128 21:28:52.027143 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"]
Nov 28 21:28:53 crc kubenswrapper[4957]: I1128 21:28:53.048740 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l" event={"ID":"a745c0d3-586f-4841-a3e4-08c009c85f9b","Type":"ContainerStarted","Data":"3a8968a9c9cd4ac89b7ea62548992333a118735f5038236785fb720f4e11ccfe"}
Nov 28 21:28:53 crc kubenswrapper[4957]: I1128 21:28:53.049354 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l" event={"ID":"a745c0d3-586f-4841-a3e4-08c009c85f9b","Type":"ContainerStarted","Data":"ebdc9857d01ad1a4b9550bbbee67117b2ef1ded2c995cb886746475ad6162823"}
Nov 28 21:28:53 crc kubenswrapper[4957]: I1128 21:28:53.064994 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l" podStartSLOduration=1.581722143 podStartE2EDuration="2.064980587s" podCreationTimestamp="2025-11-28 21:28:51 +0000 UTC" firstStartedPulling="2025-11-28 21:28:52.033872026 +0000 UTC m=+2371.502519945" lastFinishedPulling="2025-11-28 21:28:52.51713048 +0000 UTC m=+2371.985778389" observedRunningTime="2025-11-28 21:28:53.064314031 +0000 UTC m=+2372.532961950" watchObservedRunningTime="2025-11-28 21:28:53.064980587 +0000 UTC m=+2372.533628496"
Nov 28 21:29:08 crc kubenswrapper[4957]: I1128 21:29:08.992303 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:29:08 crc kubenswrapper[4957]: I1128 21:29:08.992892 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:29:38 crc kubenswrapper[4957]: I1128 21:29:38.993028 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:29:38 crc kubenswrapper[4957]: I1128 21:29:38.994541 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:29:38 crc kubenswrapper[4957]: I1128 21:29:38.994732 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2"
Nov 28 21:29:38 crc kubenswrapper[4957]: I1128 21:29:38.995601 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 21:29:38 crc kubenswrapper[4957]: I1128 21:29:38.995726 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" gracePeriod=600
Nov 28 21:29:39 crc kubenswrapper[4957]: E1128 21:29:39.119329 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:29:39 crc kubenswrapper[4957]: I1128 21:29:39.569793 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" exitCode=0
Nov 28 21:29:39 crc kubenswrapper[4957]: I1128 21:29:39.569848 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"}
Nov 28 21:29:39 crc kubenswrapper[4957]: I1128 21:29:39.569886 4957 scope.go:117] "RemoveContainer" containerID="b3001db91bc32721628253d33092eb370a1a801675f87f706e745629d542dcc7"
Nov 28 21:29:39 crc kubenswrapper[4957]: I1128 21:29:39.570509 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:29:39 crc kubenswrapper[4957]: E1128 21:29:39.570910 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:29:53 crc kubenswrapper[4957]: I1128 21:29:53.813748 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:29:53 crc kubenswrapper[4957]: E1128 21:29:53.815648 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.145023 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"]
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.147190 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.153453 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.153458 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.166753 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"]
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.257587 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.257634 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.257680 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.360412 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.360468 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.360515 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.361489 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.367984 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.377748 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg\") pod \"collect-profiles-29406090-2rmnq\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.469919 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:00 crc kubenswrapper[4957]: I1128 21:30:00.963324 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"]
Nov 28 21:30:01 crc kubenswrapper[4957]: I1128 21:30:01.793376 4957 generic.go:334] "Generic (PLEG): container finished" podID="cbd4f374-1a16-4e94-ab55-5463be9dff02" containerID="73320f6106000424242a5077c8c8446a90172be2fca3eb9849e5fb6eb4c26862" exitCode=0
Nov 28 21:30:01 crc kubenswrapper[4957]: I1128 21:30:01.793432 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq" event={"ID":"cbd4f374-1a16-4e94-ab55-5463be9dff02","Type":"ContainerDied","Data":"73320f6106000424242a5077c8c8446a90172be2fca3eb9849e5fb6eb4c26862"}
Nov 28 21:30:01 crc kubenswrapper[4957]: I1128 21:30:01.793896 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq" event={"ID":"cbd4f374-1a16-4e94-ab55-5463be9dff02","Type":"ContainerStarted","Data":"c5f69474cb3af6e225e7c2d55e1bcfb9489174fa5da246eed14158678dce7c0e"}
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.299713 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.357511 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume\") pod \"cbd4f374-1a16-4e94-ab55-5463be9dff02\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") "
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.357999 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume\") pod \"cbd4f374-1a16-4e94-ab55-5463be9dff02\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") "
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.358291 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg\") pod \"cbd4f374-1a16-4e94-ab55-5463be9dff02\" (UID: \"cbd4f374-1a16-4e94-ab55-5463be9dff02\") "
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.358664 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume" (OuterVolumeSpecName: "config-volume") pod "cbd4f374-1a16-4e94-ab55-5463be9dff02" (UID: "cbd4f374-1a16-4e94-ab55-5463be9dff02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.368546 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cbd4f374-1a16-4e94-ab55-5463be9dff02" (UID: "cbd4f374-1a16-4e94-ab55-5463be9dff02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.368630 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg" (OuterVolumeSpecName: "kube-api-access-kr7kg") pod "cbd4f374-1a16-4e94-ab55-5463be9dff02" (UID: "cbd4f374-1a16-4e94-ab55-5463be9dff02"). InnerVolumeSpecName "kube-api-access-kr7kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.370995 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbd4f374-1a16-4e94-ab55-5463be9dff02-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.371180 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbd4f374-1a16-4e94-ab55-5463be9dff02-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.371321 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr7kg\" (UniqueName: \"kubernetes.io/projected/cbd4f374-1a16-4e94-ab55-5463be9dff02-kube-api-access-kr7kg\") on node \"crc\" DevicePath \"\""
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.833061 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq" event={"ID":"cbd4f374-1a16-4e94-ab55-5463be9dff02","Type":"ContainerDied","Data":"c5f69474cb3af6e225e7c2d55e1bcfb9489174fa5da246eed14158678dce7c0e"}
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.833110 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f69474cb3af6e225e7c2d55e1bcfb9489174fa5da246eed14158678dce7c0e"
Nov 28 21:30:03 crc kubenswrapper[4957]: I1128 21:30:03.833148 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"
Nov 28 21:30:04 crc kubenswrapper[4957]: I1128 21:30:04.399069 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"]
Nov 28 21:30:04 crc kubenswrapper[4957]: I1128 21:30:04.408965 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406045-99lc6"]
Nov 28 21:30:04 crc kubenswrapper[4957]: I1128 21:30:04.828499 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19f4f47-257a-4269-96f3-e8892c939e0b" path="/var/lib/kubelet/pods/d19f4f47-257a-4269-96f3-e8892c939e0b/volumes"
Nov 28 21:30:05 crc kubenswrapper[4957]: I1128 21:30:05.813816 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:30:05 crc kubenswrapper[4957]: E1128 21:30:05.814407 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:30:20 crc kubenswrapper[4957]: I1128 21:30:20.824424 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:30:20 crc kubenswrapper[4957]: E1128 21:30:20.826443 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:30:28 crc kubenswrapper[4957]: I1128 21:30:28.878376 4957 scope.go:117] "RemoveContainer" containerID="d8a988f2d0674539ff0ec68ffa9b3f7fb21a767dd8ad81cba0c1fe76b607ecb4"
Nov 28 21:30:31 crc kubenswrapper[4957]: I1128 21:30:31.813203 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:30:31 crc kubenswrapper[4957]: E1128 21:30:31.815257 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:30:42 crc kubenswrapper[4957]: I1128 21:30:42.814137 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:30:42 crc kubenswrapper[4957]: E1128 21:30:42.814909 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:30:56 crc kubenswrapper[4957]: I1128 21:30:56.813948 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:30:56 crc kubenswrapper[4957]: E1128 21:30:56.814750 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:31:10 crc kubenswrapper[4957]: I1128 21:31:10.820963 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:31:10 crc kubenswrapper[4957]: E1128 21:31:10.821890 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:31:25 crc kubenswrapper[4957]: I1128 21:31:25.813435 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:31:25 crc kubenswrapper[4957]: E1128 21:31:25.814702 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:31:40 crc kubenswrapper[4957]: I1128 21:31:40.820722 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:31:40 crc kubenswrapper[4957]: E1128 21:31:40.821495 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.920716 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:31:44 crc kubenswrapper[4957]: E1128 21:31:44.923431 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd4f374-1a16-4e94-ab55-5463be9dff02" containerName="collect-profiles"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.923602 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4f374-1a16-4e94-ab55-5463be9dff02" containerName="collect-profiles"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.923918 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd4f374-1a16-4e94-ab55-5463be9dff02" containerName="collect-profiles"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.926631 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.938599 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.968497 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.968749 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:44 crc kubenswrapper[4957]: I1128 21:31:44.969385 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.071160 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.071312 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.071413 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.071946 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.072000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.090866 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr\") pod \"redhat-operators-z55tq\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") " pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:45 crc kubenswrapper[4957]: I1128 21:31:45.285876 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:46 crc kubenswrapper[4957]: I1128 21:31:46.188947 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:31:46 crc kubenswrapper[4957]: I1128 21:31:46.966565 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerID="e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2" exitCode=0
Nov 28 21:31:46 crc kubenswrapper[4957]: I1128 21:31:46.966605 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerDied","Data":"e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2"}
Nov 28 21:31:46 crc kubenswrapper[4957]: I1128 21:31:46.966836 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerStarted","Data":"8996da106a06045e30c54f582fd06856cd1eff55fcabe732f6f330489295c968"}
Nov 28 21:31:46 crc kubenswrapper[4957]: I1128 21:31:46.970281 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 21:31:47 crc kubenswrapper[4957]: I1128 21:31:47.983026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerStarted","Data":"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"}
Nov 28 21:31:51 crc kubenswrapper[4957]: I1128 21:31:51.016393 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerID="5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d" exitCode=0
Nov 28 21:31:51 crc kubenswrapper[4957]: I1128 21:31:51.016488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerDied","Data":"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"}
Nov 28 21:31:52 crc kubenswrapper[4957]: I1128 21:31:52.030364 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerStarted","Data":"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"}
Nov 28 21:31:52 crc kubenswrapper[4957]: I1128 21:31:52.050848 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z55tq" podStartSLOduration=3.378636167 podStartE2EDuration="8.050829541s" podCreationTimestamp="2025-11-28 21:31:44 +0000 UTC" firstStartedPulling="2025-11-28 21:31:46.96883401 +0000 UTC m=+2546.437481919" lastFinishedPulling="2025-11-28 21:31:51.641027384 +0000 UTC m=+2551.109675293" observedRunningTime="2025-11-28 21:31:52.047550461 +0000 UTC m=+2551.516198380" watchObservedRunningTime="2025-11-28 21:31:52.050829541 +0000 UTC m=+2551.519477450"
Nov 28 21:31:53 crc kubenswrapper[4957]: I1128 21:31:53.813147 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:31:53 crc kubenswrapper[4957]: E1128 21:31:53.813786 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:31:55 crc kubenswrapper[4957]: I1128 21:31:55.286496 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:55 crc kubenswrapper[4957]: I1128 21:31:55.286784 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:31:56 crc kubenswrapper[4957]: I1128 21:31:56.341245 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z55tq" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="registry-server" probeResult="failure" output=<
Nov 28 21:31:56 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Nov 28 21:31:56 crc kubenswrapper[4957]: >
Nov 28 21:32:05 crc kubenswrapper[4957]: I1128 21:32:05.356463 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:32:05 crc kubenswrapper[4957]: I1128 21:32:05.432692 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:32:05 crc kubenswrapper[4957]: I1128 21:32:05.606015 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:32:06 crc kubenswrapper[4957]: I1128 21:32:06.813661 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:32:06 crc kubenswrapper[4957]: E1128 21:32:06.814672 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.206879 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z55tq" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="registry-server" containerID="cri-o://7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2" gracePeriod=2
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.690511 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.789804 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities\") pod \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") "
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.789998 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr\") pod \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") "
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.790040 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content\") pod \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\" (UID: \"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6\") "
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.791185 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities" (OuterVolumeSpecName: "utilities") pod "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" (UID: "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.795494 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr" (OuterVolumeSpecName: "kube-api-access-g5tbr") pod "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" (UID: "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6"). InnerVolumeSpecName "kube-api-access-g5tbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.889685 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" (UID: "f4dd88ec-83e8-4a35-a01d-7d5bba512bd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.894454 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.894490 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tbr\" (UniqueName: \"kubernetes.io/projected/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-kube-api-access-g5tbr\") on node \"crc\" DevicePath \"\""
Nov 28 21:32:07 crc kubenswrapper[4957]: I1128 21:32:07.894502 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.219617 4957 generic.go:334] "Generic (PLEG): container finished" podID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerID="7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2" exitCode=0
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.219927 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerDied","Data":"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"}
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.219952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z55tq" event={"ID":"f4dd88ec-83e8-4a35-a01d-7d5bba512bd6","Type":"ContainerDied","Data":"8996da106a06045e30c54f582fd06856cd1eff55fcabe732f6f330489295c968"}
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.219968 4957 scope.go:117] "RemoveContainer" containerID="7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.220083 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z55tq"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.248689 4957 scope.go:117] "RemoveContainer" containerID="5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.266156 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.280133 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z55tq"]
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.287258 4957 scope.go:117] "RemoveContainer" containerID="e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.343154 4957 scope.go:117] "RemoveContainer" containerID="7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"
Nov 28 21:32:08 crc kubenswrapper[4957]: E1128 21:32:08.343724 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2\": container with ID starting with 7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2 not found: ID does not exist" containerID="7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.343764 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2"} err="failed to get container status \"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2\": rpc error: code = NotFound desc = could not find container \"7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2\": container with ID starting with 7a422ab9c8664897456262b18c7f91b0a5ee6259ef52b44bdddaefcb530862e2 not found: ID does not exist"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.343789 4957 scope.go:117] "RemoveContainer" containerID="5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"
Nov 28 21:32:08 crc kubenswrapper[4957]: E1128 21:32:08.344044 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d\": container with ID starting with 5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d not found: ID does not exist" containerID="5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.344136 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d"} err="failed to get container status \"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d\": rpc error: code = NotFound desc = could not find container \"5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d\": container with ID starting with 5713f8d3e5b018170d5b91a7e3aeed7253d7357f954e245a086af81ede6eb61d not found: ID does not exist"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.344227 4957 scope.go:117] "RemoveContainer" containerID="e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2"
Nov 28 21:32:08 crc kubenswrapper[4957]: E1128 21:32:08.344514 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2\": container with ID starting with e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2 not found: ID does not exist" containerID="e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.344613 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2"} err="failed to get container status \"e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2\": rpc error: code = NotFound desc = could not find container \"e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2\": container with ID starting with e6197a5ed0162395eb2cb7e7bd5e1a42bdabbb988274830575a0b17edf5b0cf2 not found: ID does not exist"
Nov 28 21:32:08 crc kubenswrapper[4957]: I1128 21:32:08.830127 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" path="/var/lib/kubelet/pods/f4dd88ec-83e8-4a35-a01d-7d5bba512bd6/volumes"
Nov 28 21:32:18 crc kubenswrapper[4957]: I1128 21:32:18.815044 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:32:18 crc kubenswrapper[4957]: E1128 21:32:18.815905 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:32:31 crc kubenswrapper[4957]: I1128 21:32:31.813862 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:32:31 crc kubenswrapper[4957]: E1128 21:32:31.814682 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:32:46 crc kubenswrapper[4957]: I1128 21:32:46.813425 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:32:46 crc kubenswrapper[4957]: E1128 21:32:46.814281 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:33:00 crc kubenswrapper[4957]: I1128 21:33:00.821459 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:33:00 crc kubenswrapper[4957]: E1128 21:33:00.822263 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:33:14 crc kubenswrapper[4957]: I1128 21:33:14.812611 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:33:14 crc kubenswrapper[4957]: E1128 21:33:14.813452 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:33:28 crc kubenswrapper[4957]: I1128 21:33:28.814544 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d"
Nov 28 21:33:28 crc kubenswrapper[4957]: E1128 21:33:28.815330 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:33:35 crc kubenswrapper[4957]: I1128 21:33:35.181058 4957 generic.go:334] "Generic (PLEG): container finished" podID="a745c0d3-586f-4841-a3e4-08c009c85f9b" containerID="3a8968a9c9cd4ac89b7ea62548992333a118735f5038236785fb720f4e11ccfe" exitCode=0
Nov 28 21:33:35 crc kubenswrapper[4957]: I1128 21:33:35.181148 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l" event={"ID":"a745c0d3-586f-4841-a3e4-08c009c85f9b","Type":"ContainerDied","Data":"3a8968a9c9cd4ac89b7ea62548992333a118735f5038236785fb720f4e11ccfe"}
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.692804 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.856269 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxvxz\" (UniqueName: \"kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz\") pod \"a745c0d3-586f-4841-a3e4-08c009c85f9b\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") "
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.856556 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle\") pod \"a745c0d3-586f-4841-a3e4-08c009c85f9b\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") "
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.856628 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0\") pod \"a745c0d3-586f-4841-a3e4-08c009c85f9b\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") "
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.856653 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key\") pod \"a745c0d3-586f-4841-a3e4-08c009c85f9b\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") "
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.856677 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory\") pod \"a745c0d3-586f-4841-a3e4-08c009c85f9b\" (UID: \"a745c0d3-586f-4841-a3e4-08c009c85f9b\") "
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.862098 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a745c0d3-586f-4841-a3e4-08c009c85f9b" (UID: "a745c0d3-586f-4841-a3e4-08c009c85f9b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.862168 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz" (OuterVolumeSpecName: "kube-api-access-wxvxz") pod "a745c0d3-586f-4841-a3e4-08c009c85f9b" (UID: "a745c0d3-586f-4841-a3e4-08c009c85f9b"). InnerVolumeSpecName "kube-api-access-wxvxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.887765 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a745c0d3-586f-4841-a3e4-08c009c85f9b" (UID: "a745c0d3-586f-4841-a3e4-08c009c85f9b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.893695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a745c0d3-586f-4841-a3e4-08c009c85f9b" (UID: "a745c0d3-586f-4841-a3e4-08c009c85f9b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.895669 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory" (OuterVolumeSpecName: "inventory") pod "a745c0d3-586f-4841-a3e4-08c009c85f9b" (UID: "a745c0d3-586f-4841-a3e4-08c009c85f9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.961513 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.961554 4957 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.961567 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.961578 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a745c0d3-586f-4841-a3e4-08c009c85f9b-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:33:36 crc kubenswrapper[4957]: I1128 21:33:36.961591 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxvxz\" (UniqueName: \"kubernetes.io/projected/a745c0d3-586f-4841-a3e4-08c009c85f9b-kube-api-access-wxvxz\") on node \"crc\" DevicePath \"\""
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.205517 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l" event={"ID":"a745c0d3-586f-4841-a3e4-08c009c85f9b","Type":"ContainerDied","Data":"ebdc9857d01ad1a4b9550bbbee67117b2ef1ded2c995cb886746475ad6162823"}
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.205556 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebdc9857d01ad1a4b9550bbbee67117b2ef1ded2c995cb886746475ad6162823"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.205552 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.294055 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"]
Nov 28 21:33:37 crc kubenswrapper[4957]: E1128 21:33:37.294698 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="extract-utilities"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.294724 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="extract-utilities"
Nov 28 21:33:37 crc kubenswrapper[4957]: E1128 21:33:37.294777 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="extract-content"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.294786 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="extract-content"
Nov 28 21:33:37 crc kubenswrapper[4957]: E1128 21:33:37.294807 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a745c0d3-586f-4841-a3e4-08c009c85f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.294819 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a745c0d3-586f-4841-a3e4-08c009c85f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:33:37 crc kubenswrapper[4957]: E1128 21:33:37.294835 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="registry-server"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.294844 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="registry-server"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.295117 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a745c0d3-586f-4841-a3e4-08c009c85f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.295143 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dd88ec-83e8-4a35-a01d-7d5bba512bd6" containerName="registry-server"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.296126 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.298675 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.298781 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.298994 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.299053 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.299082 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.299359 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.301787 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.305910 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"]
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472417 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472487 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472646 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472673 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"
Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472698 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName:
\"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472725 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvpt\" (UniqueName: \"kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472776 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472805 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.472821 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575351 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575419 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575454 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575485 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvpt\" (UniqueName: 
\"kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575507 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575540 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575555 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575625 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.575674 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.577203 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.579075 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.579674 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.580029 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.580200 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.580282 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.580825 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.585598 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.598819 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvpt\" (UniqueName: \"kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rdwmr\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:37 crc kubenswrapper[4957]: I1128 21:33:37.629903 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:33:38 crc kubenswrapper[4957]: W1128 21:33:38.184608 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17490c8_1d2b_43d6_aefe_bcbc181d72aa.slice/crio-33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae WatchSource:0}: Error finding container 33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae: Status 404 returned error can't find the container with id 33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae Nov 28 21:33:38 crc kubenswrapper[4957]: I1128 21:33:38.190904 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr"] Nov 28 21:33:38 crc kubenswrapper[4957]: I1128 21:33:38.221911 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" event={"ID":"d17490c8-1d2b-43d6-aefe-bcbc181d72aa","Type":"ContainerStarted","Data":"33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae"} Nov 28 21:33:39 crc kubenswrapper[4957]: I1128 21:33:39.812661 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:33:39 crc kubenswrapper[4957]: E1128 21:33:39.813434 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:33:40 crc kubenswrapper[4957]: I1128 21:33:40.245674 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" event={"ID":"d17490c8-1d2b-43d6-aefe-bcbc181d72aa","Type":"ContainerStarted","Data":"8f05251790fd1c5ab2ce31868ccb48c0a038e32d8d3da5401d2326472a2cd26f"} Nov 28 21:33:40 crc kubenswrapper[4957]: I1128 21:33:40.280017 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" podStartSLOduration=2.449871253 podStartE2EDuration="3.279991782s" podCreationTimestamp="2025-11-28 21:33:37 +0000 UTC" firstStartedPulling="2025-11-28 21:33:38.19086155 +0000 UTC m=+2657.659509459" lastFinishedPulling="2025-11-28 21:33:39.020982089 +0000 UTC m=+2658.489629988" observedRunningTime="2025-11-28 21:33:40.26396869 +0000 UTC m=+2659.732616609" watchObservedRunningTime="2025-11-28 21:33:40.279991782 +0000 UTC m=+2659.748639691" Nov 28 21:33:53 crc kubenswrapper[4957]: I1128 21:33:53.813408 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:33:53 crc kubenswrapper[4957]: E1128 21:33:53.814207 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:34:04 crc kubenswrapper[4957]: I1128 21:34:04.813589 4957 scope.go:117] 
"RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:34:04 crc kubenswrapper[4957]: E1128 21:34:04.814240 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:34:15 crc kubenswrapper[4957]: I1128 21:34:15.813527 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:34:15 crc kubenswrapper[4957]: E1128 21:34:15.814577 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:34:29 crc kubenswrapper[4957]: I1128 21:34:29.812481 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:34:29 crc kubenswrapper[4957]: E1128 21:34:29.813196 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:34:43 crc kubenswrapper[4957]: I1128 21:34:43.812959 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:34:44 crc kubenswrapper[4957]: I1128 21:34:44.985746 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3"} Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.564495 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.568199 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.594783 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.681656 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpr6q\" (UniqueName: \"kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.681902 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.682237 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.784455 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpr6q\" (UniqueName: \"kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.784939 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.785103 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.785531 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.785535 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.802859 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cpr6q\" (UniqueName: \"kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q\") pod \"certified-operators-cd2qg\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:49 crc kubenswrapper[4957]: I1128 21:34:49.894218 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:50 crc kubenswrapper[4957]: I1128 21:34:50.403798 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:34:51 crc kubenswrapper[4957]: I1128 21:34:51.042434 4957 generic.go:334] "Generic (PLEG): container finished" podID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerID="57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f" exitCode=0 Nov 28 21:34:51 crc kubenswrapper[4957]: I1128 21:34:51.042647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerDied","Data":"57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f"} Nov 28 21:34:51 crc kubenswrapper[4957]: I1128 21:34:51.042752 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerStarted","Data":"35603c20c95af27ba2387f4843ab51a9502277f58c50a6dc01b19a44b2bd3f01"} Nov 28 21:34:52 crc kubenswrapper[4957]: I1128 21:34:52.063993 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerStarted","Data":"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5"} Nov 28 21:34:53 crc kubenswrapper[4957]: I1128 21:34:53.077167 4957 generic.go:334] "Generic (PLEG): container finished" podID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerID="b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5" exitCode=0 Nov 28 21:34:53 crc kubenswrapper[4957]: I1128 21:34:53.077253 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerDied","Data":"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5"} Nov 28 21:34:54 crc kubenswrapper[4957]: I1128 21:34:54.089334 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerStarted","Data":"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3"} Nov 28 21:34:54 crc kubenswrapper[4957]: I1128 21:34:54.119193 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd2qg" podStartSLOduration=2.506824261 podStartE2EDuration="5.119172619s" podCreationTimestamp="2025-11-28 21:34:49 +0000 UTC" firstStartedPulling="2025-11-28 21:34:51.044484622 +0000 UTC m=+2730.513132531" lastFinishedPulling="2025-11-28 21:34:53.65683299 +0000 UTC m=+2733.125480889" observedRunningTime="2025-11-28 21:34:54.106303525 +0000 UTC m=+2733.574951434" watchObservedRunningTime="2025-11-28 21:34:54.119172619 +0000 UTC m=+2733.587820528" Nov 28 21:34:59 crc kubenswrapper[4957]: I1128 21:34:59.894739 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:59 crc kubenswrapper[4957]: I1128 21:34:59.895341 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:34:59 crc kubenswrapper[4957]: I1128 21:34:59.949101 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.217489 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.765221 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.768318 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.783644 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.855424 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.855549 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.855574 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cknhd\" (UniqueName: \"kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.956430 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.957714 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.957814 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.957836 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cknhd\" 
(UniqueName: \"kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.958604 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.958671 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:00 crc kubenswrapper[4957]: I1128 21:35:00.987328 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cknhd\" (UniqueName: \"kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd\") pod \"community-operators-qjc2l\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:01 crc kubenswrapper[4957]: I1128 21:35:01.111809 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:01 crc kubenswrapper[4957]: I1128 21:35:01.675454 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:02 crc kubenswrapper[4957]: I1128 21:35:02.191438 4957 generic.go:334] "Generic (PLEG): container finished" podID="536c4117-26cb-45b6-b085-52e58a71358f" containerID="fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8" exitCode=0 Nov 28 21:35:02 crc kubenswrapper[4957]: I1128 21:35:02.191901 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd2qg" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="registry-server" containerID="cri-o://ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3" gracePeriod=2 Nov 28 21:35:02 crc kubenswrapper[4957]: I1128 21:35:02.191969 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerDied","Data":"fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8"} Nov 28 21:35:02 crc kubenswrapper[4957]: I1128 21:35:02.191991 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerStarted","Data":"e1480ba9c9fea28f35d6c52c8127ce2ec414e5e88f10623b5a1d2ac892372fb9"} Nov 28 21:35:02 crc kubenswrapper[4957]: I1128 21:35:02.904947 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.025775 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpr6q\" (UniqueName: \"kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q\") pod \"25f9cc68-96c5-4010-86a7-0026b8dea173\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.025917 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content\") pod \"25f9cc68-96c5-4010-86a7-0026b8dea173\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.026299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities\") pod \"25f9cc68-96c5-4010-86a7-0026b8dea173\" (UID: \"25f9cc68-96c5-4010-86a7-0026b8dea173\") " Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.027312 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities" (OuterVolumeSpecName: "utilities") pod "25f9cc68-96c5-4010-86a7-0026b8dea173" (UID: "25f9cc68-96c5-4010-86a7-0026b8dea173"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.028625 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.044934 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q" (OuterVolumeSpecName: "kube-api-access-cpr6q") pod "25f9cc68-96c5-4010-86a7-0026b8dea173" (UID: "25f9cc68-96c5-4010-86a7-0026b8dea173"). InnerVolumeSpecName "kube-api-access-cpr6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.083079 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f9cc68-96c5-4010-86a7-0026b8dea173" (UID: "25f9cc68-96c5-4010-86a7-0026b8dea173"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.130719 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpr6q\" (UniqueName: \"kubernetes.io/projected/25f9cc68-96c5-4010-86a7-0026b8dea173-kube-api-access-cpr6q\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.130755 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f9cc68-96c5-4010-86a7-0026b8dea173-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.231182 4957 generic.go:334] "Generic (PLEG): container finished" podID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerID="ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3" exitCode=0 Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.231239 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerDied","Data":"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3"} Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.231267 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd2qg" event={"ID":"25f9cc68-96c5-4010-86a7-0026b8dea173","Type":"ContainerDied","Data":"35603c20c95af27ba2387f4843ab51a9502277f58c50a6dc01b19a44b2bd3f01"} Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.231302 4957 scope.go:117] "RemoveContainer" containerID="ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.231334 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd2qg" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.295714 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.306069 4957 scope.go:117] "RemoveContainer" containerID="b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.306863 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd2qg"] Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.332912 4957 scope.go:117] "RemoveContainer" containerID="57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.408781 4957 scope.go:117] "RemoveContainer" containerID="ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3" Nov 28 21:35:03 crc kubenswrapper[4957]: E1128 21:35:03.409619 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3\": container with ID starting with ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3 not found: ID does not exist" containerID="ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.409652 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3"} err="failed to get container status \"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3\": rpc error: code = NotFound desc = could not find container \"ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3\": container with ID starting with ff38a9d891b3e73abc4c12f56aa54f06fd248e56fdfcfd10f94ad0efeb6ebcd3 not found: ID does not exist" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.409672 4957 scope.go:117] "RemoveContainer" containerID="b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5" Nov 28 21:35:03 crc kubenswrapper[4957]: E1128 21:35:03.410036 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5\": container with ID starting with b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5 not found: ID does not exist" containerID="b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.410057 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5"} err="failed to get container status \"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5\": rpc error: code = NotFound desc = could not find container \"b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5\": container with ID starting with b1f572cfd7a5a5436c282aa415e81a8a2b3040272827b050473195c39de04dd5 not found: ID does not exist" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.410068 4957 scope.go:117] "RemoveContainer" containerID="57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f" Nov 28 21:35:03 crc kubenswrapper[4957]: E1128 21:35:03.410330 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f\": container with ID starting with 57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f not found: ID does not exist" containerID="57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f" Nov 28 21:35:03 crc kubenswrapper[4957]: I1128 21:35:03.410349 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f"} err="failed to get container status \"57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f\": rpc error: code = NotFound desc = could not find container \"57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f\": container with ID starting with 57a1e89ed89d46406bfc64443d23a2a5aac74f3d03b63b2339b16517edc0ab2f not found: ID does not exist" Nov 28 21:35:04 crc kubenswrapper[4957]: I1128 21:35:04.251924 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerStarted","Data":"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950"} Nov 28 21:35:04 crc kubenswrapper[4957]: I1128 21:35:04.848478 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" path="/var/lib/kubelet/pods/25f9cc68-96c5-4010-86a7-0026b8dea173/volumes" Nov 28 21:35:05 crc kubenswrapper[4957]: I1128 21:35:05.263677 4957 generic.go:334] "Generic (PLEG): container finished" podID="536c4117-26cb-45b6-b085-52e58a71358f" containerID="dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950" exitCode=0 Nov 28 21:35:05 crc kubenswrapper[4957]: I1128 21:35:05.263773 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerDied","Data":"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950"} Nov 28 21:35:06 crc kubenswrapper[4957]: I1128 21:35:06.278017 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerStarted","Data":"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a"} Nov 28 21:35:06 crc kubenswrapper[4957]: I1128 21:35:06.305596 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjc2l" podStartSLOduration=2.660622262 podStartE2EDuration="6.305574994s" podCreationTimestamp="2025-11-28 21:35:00 +0000 UTC" firstStartedPulling="2025-11-28 21:35:02.192958723 +0000 UTC m=+2741.661606622" lastFinishedPulling="2025-11-28 21:35:05.837911445 +0000 UTC m=+2745.306559354" observedRunningTime="2025-11-28 21:35:06.294932435 +0000 UTC m=+2745.763580354" watchObservedRunningTime="2025-11-28 21:35:06.305574994 +0000 UTC m=+2745.774222913" Nov 28 21:35:11 crc kubenswrapper[4957]: I1128 21:35:11.115456 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:11 crc kubenswrapper[4957]: I1128 21:35:11.116168 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:11 crc kubenswrapper[4957]: I1128 21:35:11.214837 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:11 crc kubenswrapper[4957]: I1128 21:35:11.403833 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:11 crc kubenswrapper[4957]: I1128 21:35:11.451761 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.376198 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjc2l" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="registry-server" containerID="cri-o://39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a" gracePeriod=2 Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.904448 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.987821 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities\") pod \"536c4117-26cb-45b6-b085-52e58a71358f\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.987915 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content\") pod \"536c4117-26cb-45b6-b085-52e58a71358f\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.988049 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cknhd\" (UniqueName: \"kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd\") pod \"536c4117-26cb-45b6-b085-52e58a71358f\" (UID: \"536c4117-26cb-45b6-b085-52e58a71358f\") " Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.995169 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities" (OuterVolumeSpecName: "utilities") pod "536c4117-26cb-45b6-b085-52e58a71358f" (UID: "536c4117-26cb-45b6-b085-52e58a71358f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:35:13 crc kubenswrapper[4957]: I1128 21:35:13.999396 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd" (OuterVolumeSpecName: "kube-api-access-cknhd") pod "536c4117-26cb-45b6-b085-52e58a71358f" (UID: "536c4117-26cb-45b6-b085-52e58a71358f"). InnerVolumeSpecName "kube-api-access-cknhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.043277 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "536c4117-26cb-45b6-b085-52e58a71358f" (UID: "536c4117-26cb-45b6-b085-52e58a71358f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.090738 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.090769 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cknhd\" (UniqueName: \"kubernetes.io/projected/536c4117-26cb-45b6-b085-52e58a71358f-kube-api-access-cknhd\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.090781 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536c4117-26cb-45b6-b085-52e58a71358f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.389082 4957 generic.go:334] "Generic (PLEG): container finished" podID="536c4117-26cb-45b6-b085-52e58a71358f" containerID="39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a" exitCode=0 Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.389132 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerDied","Data":"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a"} Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.389154 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjc2l" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.389176 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjc2l" event={"ID":"536c4117-26cb-45b6-b085-52e58a71358f","Type":"ContainerDied","Data":"e1480ba9c9fea28f35d6c52c8127ce2ec414e5e88f10623b5a1d2ac892372fb9"} Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.389196 4957 scope.go:117] "RemoveContainer" containerID="39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.417804 4957 scope.go:117] "RemoveContainer" containerID="dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.432070 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.447497 4957 scope.go:117] "RemoveContainer" containerID="fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.456565 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjc2l"] Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.509573 4957 scope.go:117] "RemoveContainer" containerID="39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a" Nov 28 21:35:14 crc kubenswrapper[4957]: E1128 21:35:14.509988 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a\": container with ID starting with 39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a not found: ID does not exist" containerID="39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.510053 
4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a"} err="failed to get container status \"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a\": rpc error: code = NotFound desc = could not find container \"39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a\": container with ID starting with 39363867afd2cbb676874fd4240565cdd37187ad15ef9e1d37ec152f17d02b2a not found: ID does not exist" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.510075 4957 scope.go:117] "RemoveContainer" containerID="dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950" Nov 28 21:35:14 crc kubenswrapper[4957]: E1128 21:35:14.510256 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950\": container with ID starting with dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950 not found: ID does not exist" containerID="dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.510275 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950"} err="failed to get container status \"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950\": rpc error: code = NotFound desc = could not find container \"dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950\": container with ID starting with dad988a9cc15b3bd8ebd0c9a0f6ea05b1779fde8d8e7285977c9bd3a9e9d4950 not found: ID does not exist" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.510288 4957 scope.go:117] "RemoveContainer" containerID="fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8" Nov 28 21:35:14 crc kubenswrapper[4957]: E1128 21:35:14.510499 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8\": container with ID starting with fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8 not found: ID does not exist" containerID="fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.510521 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8"} err="failed to get container status \"fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8\": rpc error: code = NotFound desc = could not find container \"fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8\": container with ID starting with fbb0728c9934649b3bd82d653db6a186aba7fb157a47c049f411af903d9891e8 not found: ID does not exist" Nov 28 21:35:14 crc kubenswrapper[4957]: I1128 21:35:14.830312 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536c4117-26cb-45b6-b085-52e58a71358f" path="/var/lib/kubelet/pods/536c4117-26cb-45b6-b085-52e58a71358f/volumes" Nov 28 21:36:44 crc kubenswrapper[4957]: I1128 21:36:44.377106 4957 generic.go:334] "Generic (PLEG): container finished" podID="d17490c8-1d2b-43d6-aefe-bcbc181d72aa" containerID="8f05251790fd1c5ab2ce31868ccb48c0a038e32d8d3da5401d2326472a2cd26f" exitCode=0 Nov 28 21:36:44 crc kubenswrapper[4957]: 
I1128 21:36:44.377177 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" event={"ID":"d17490c8-1d2b-43d6-aefe-bcbc181d72aa","Type":"ContainerDied","Data":"8f05251790fd1c5ab2ce31868ccb48c0a038e32d8d3da5401d2326472a2cd26f"} Nov 28 21:36:45 crc kubenswrapper[4957]: I1128 21:36:45.911663 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.072798 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.072911 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.072990 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073032 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073081 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073111 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073155 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvpt\" (UniqueName: \"kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073338 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.073413 4957 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key\") pod \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\" (UID: \"d17490c8-1d2b-43d6-aefe-bcbc181d72aa\") " Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.080092 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.084298 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt" (OuterVolumeSpecName: "kube-api-access-qwvpt") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "kube-api-access-qwvpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.107825 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.111148 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.115394 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory" (OuterVolumeSpecName: "inventory") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.128102 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.134365 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.142594 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.152564 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d17490c8-1d2b-43d6-aefe-bcbc181d72aa" (UID: "d17490c8-1d2b-43d6-aefe-bcbc181d72aa"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175641 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175674 4957 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175686 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175695 4957 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175705 4957 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175714 4957 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175723 4957 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175731 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvpt\" (UniqueName: \"kubernetes.io/projected/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-kube-api-access-qwvpt\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.175741 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d17490c8-1d2b-43d6-aefe-bcbc181d72aa-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 
21:36:46.408956 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" event={"ID":"d17490c8-1d2b-43d6-aefe-bcbc181d72aa","Type":"ContainerDied","Data":"33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae"} Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.408994 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33377c43a30693040555cc09d7004bdaf565d6352b2c23f5ea86d6d675d144ae" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.409018 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rdwmr" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.510331 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7"] Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.510956 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="extract-content" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.510985 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="extract-content" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511011 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="extract-utilities" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511021 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="extract-utilities" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511049 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511059 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511082 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511090 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511103 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="extract-utilities" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511112 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="extract-utilities" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511134 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="extract-content" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511143 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="extract-content" Nov 28 21:36:46 crc kubenswrapper[4957]: E1128 21:36:46.511168 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17490c8-1d2b-43d6-aefe-bcbc181d72aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 21:36:46 crc kubenswrapper[4957]: 
I1128 21:36:46.511177 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17490c8-1d2b-43d6-aefe-bcbc181d72aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511479 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f9cc68-96c5-4010-86a7-0026b8dea173" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511524 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c4117-26cb-45b6-b085-52e58a71358f" containerName="registry-server" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.511546 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17490c8-1d2b-43d6-aefe-bcbc181d72aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.512748 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.516007 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.516482 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.516567 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.516573 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.516487 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.543489 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7"] Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.585018 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.585269 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.585434 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 
21:36:46.585539 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqw8j\" (UniqueName: \"kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.585828 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.586020 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.586327 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688520 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqw8j\" (UniqueName: \"kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688617 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688648 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688719 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: 
\"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688817 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688871 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.688927 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.692958 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.696939 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.697301 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.697459 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.697851 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.697909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.708811 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqw8j\" (UniqueName: \"kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:46 crc kubenswrapper[4957]: I1128 21:36:46.839192 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:36:47 crc kubenswrapper[4957]: I1128 21:36:47.430547 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7"] Nov 28 21:36:47 crc kubenswrapper[4957]: I1128 21:36:47.434121 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:36:48 crc kubenswrapper[4957]: I1128 21:36:48.452469 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" event={"ID":"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d","Type":"ContainerStarted","Data":"05c8531ca9a18b57bde7a477e73ca9a14bc882091bce98b08b34a33b6fc1da5f"} Nov 28 21:36:48 crc kubenswrapper[4957]: I1128 21:36:48.453086 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" event={"ID":"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d","Type":"ContainerStarted","Data":"8f734c7a315009a380f8606b46b9acfa159504e15989a4b8ad19c5b926fb1e7d"} Nov 28 21:36:48 crc kubenswrapper[4957]: I1128 21:36:48.477268 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" podStartSLOduration=1.9788865310000001 podStartE2EDuration="2.477249879s" podCreationTimestamp="2025-11-28 21:36:46 +0000 UTC" firstStartedPulling="2025-11-28 21:36:47.433879783 +0000 UTC m=+2846.902527692" lastFinishedPulling="2025-11-28 21:36:47.932243111 +0000 UTC m=+2847.400891040" observedRunningTime="2025-11-28 21:36:48.473751133 +0000 UTC m=+2847.942399042" watchObservedRunningTime="2025-11-28 21:36:48.477249879 +0000 UTC m=+2847.945897788" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.115737 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.120108 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.133682 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.210514 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbvx\" (UniqueName: \"kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.210586 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.211118 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.313546 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbvx\" (UniqueName: \"kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.313614 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.313757 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.314200 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.314261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.337272 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tgbvx\" (UniqueName: \"kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx\") pod \"redhat-marketplace-hqrb8\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.449860 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:00 crc kubenswrapper[4957]: I1128 21:37:00.957498 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:01 crc kubenswrapper[4957]: I1128 21:37:01.587738 4957 generic.go:334] "Generic (PLEG): container finished" podID="3660ab47-f908-46c8-b234-30110c728a62" containerID="e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa" exitCode=0 Nov 28 21:37:01 crc kubenswrapper[4957]: I1128 21:37:01.587801 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerDied","Data":"e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa"} Nov 28 21:37:01 crc kubenswrapper[4957]: I1128 21:37:01.589066 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerStarted","Data":"c492b929788f9b89a70325c35cdc7caa6c7a597c9c694e64ec735f2cbfb58c1d"} Nov 28 21:37:02 crc kubenswrapper[4957]: I1128 21:37:02.606226 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerStarted","Data":"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c"} Nov 28 21:37:03 crc kubenswrapper[4957]: I1128 21:37:03.619886 4957 generic.go:334] "Generic (PLEG): container finished" podID="3660ab47-f908-46c8-b234-30110c728a62" containerID="6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c" exitCode=0 Nov 28 21:37:03 crc kubenswrapper[4957]: I1128 21:37:03.619992 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerDied","Data":"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c"} Nov 28 21:37:04 crc kubenswrapper[4957]: I1128 21:37:04.633500 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerStarted","Data":"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944"} Nov 28 21:37:04 crc kubenswrapper[4957]: I1128 21:37:04.666789 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqrb8" podStartSLOduration=2.140270639 podStartE2EDuration="4.666765161s" podCreationTimestamp="2025-11-28 21:37:00 +0000 UTC" firstStartedPulling="2025-11-28 21:37:01.590408803 +0000 UTC m=+2861.059056712" lastFinishedPulling="2025-11-28 21:37:04.116903325 +0000 UTC m=+2863.585551234" observedRunningTime="2025-11-28 21:37:04.655006114 +0000 UTC m=+2864.123654023" watchObservedRunningTime="2025-11-28 21:37:04.666765161 +0000 UTC m=+2864.135413070" Nov 28 21:37:08 crc kubenswrapper[4957]: I1128 21:37:08.992119 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:37:08 crc kubenswrapper[4957]: I1128 21:37:08.992717 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:37:10 crc kubenswrapper[4957]: I1128 21:37:10.451091 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:10 crc kubenswrapper[4957]: I1128 21:37:10.451485 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:10 crc kubenswrapper[4957]: I1128 21:37:10.499357 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:10 crc kubenswrapper[4957]: I1128 21:37:10.742378 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:10 crc kubenswrapper[4957]: I1128 21:37:10.788736 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:12 crc kubenswrapper[4957]: I1128 21:37:12.712712 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqrb8" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="registry-server" containerID="cri-o://bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944" gracePeriod=2 Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.245063 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.378988 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbvx\" (UniqueName: \"kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx\") pod \"3660ab47-f908-46c8-b234-30110c728a62\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.379238 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content\") pod \"3660ab47-f908-46c8-b234-30110c728a62\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.379664 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities\") pod \"3660ab47-f908-46c8-b234-30110c728a62\" (UID: \"3660ab47-f908-46c8-b234-30110c728a62\") " Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.386833 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities" (OuterVolumeSpecName: "utilities") pod "3660ab47-f908-46c8-b234-30110c728a62" (UID: "3660ab47-f908-46c8-b234-30110c728a62"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.402498 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx" (OuterVolumeSpecName: "kube-api-access-tgbvx") pod "3660ab47-f908-46c8-b234-30110c728a62" (UID: "3660ab47-f908-46c8-b234-30110c728a62"). InnerVolumeSpecName "kube-api-access-tgbvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.413868 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3660ab47-f908-46c8-b234-30110c728a62" (UID: "3660ab47-f908-46c8-b234-30110c728a62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.482408 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbvx\" (UniqueName: \"kubernetes.io/projected/3660ab47-f908-46c8-b234-30110c728a62-kube-api-access-tgbvx\") on node \"crc\" DevicePath \"\"" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.482446 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.482456 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3660ab47-f908-46c8-b234-30110c728a62-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.724800 4957 generic.go:334] "Generic (PLEG): container finished" podID="3660ab47-f908-46c8-b234-30110c728a62" containerID="bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944" exitCode=0 Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.724850 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerDied","Data":"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944"} Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.724912 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqrb8" event={"ID":"3660ab47-f908-46c8-b234-30110c728a62","Type":"ContainerDied","Data":"c492b929788f9b89a70325c35cdc7caa6c7a597c9c694e64ec735f2cbfb58c1d"} Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.724936 4957 scope.go:117] "RemoveContainer" containerID="bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.724956 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqrb8" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.752525 4957 scope.go:117] "RemoveContainer" containerID="6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.762591 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.772956 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqrb8"] Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.790882 4957 scope.go:117] "RemoveContainer" containerID="e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.844134 4957 scope.go:117] "RemoveContainer" containerID="bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944" Nov 28 21:37:13 crc kubenswrapper[4957]: E1128 21:37:13.844560 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944\": container with ID starting with bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944 not found: ID does not exist" containerID="bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.844594 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944"} err="failed to get container status \"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944\": rpc error: code = NotFound desc = could not find container \"bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944\": container with ID starting with bb677b7217c6c5830de83e3967546e23394c0dc5a9d1b5333d2614580fb77944 not found: ID does not exist" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.844613 4957 scope.go:117] "RemoveContainer" containerID="6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c" Nov 28 21:37:13 crc kubenswrapper[4957]: E1128 21:37:13.844905 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c\": container with ID starting with 6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c not found: ID does not exist" containerID="6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.844951 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c"} err="failed to get container status \"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c\": rpc error: code = NotFound desc = could not find container \"6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c\": container with ID starting with 6e73cbf61e0595a714fe3a33ce060ce58fb22141b16fedb59a5a6fcf5feff87c not found: ID does not exist" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.844983 4957 scope.go:117] "RemoveContainer" containerID="e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa" Nov 28 21:37:13 crc kubenswrapper[4957]: E1128 21:37:13.845311 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa\": container with ID starting with e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa not found: ID does not exist" containerID="e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa" Nov 28 21:37:13 crc kubenswrapper[4957]: I1128 21:37:13.845344 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa"} err="failed to get container status \"e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa\": rpc error: code = NotFound desc = could not find container \"e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa\": container with ID starting with e1847d3b3dfc9c9b79d295d440482258fbb83ada4c6eaa654b5abda900b8fcfa not found: ID does not exist" Nov 28 21:37:14 crc kubenswrapper[4957]: I1128 21:37:14.831344 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3660ab47-f908-46c8-b234-30110c728a62" path="/var/lib/kubelet/pods/3660ab47-f908-46c8-b234-30110c728a62/volumes" Nov 28 21:37:38 crc kubenswrapper[4957]: I1128 21:37:38.992145 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:37:38 crc kubenswrapper[4957]: I1128 21:37:38.992850 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:38:08 crc kubenswrapper[4957]: I1128 21:38:08.992334 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:38:08 crc kubenswrapper[4957]: I1128 21:38:08.992855 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:38:08 crc kubenswrapper[4957]: I1128 21:38:08.992904 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:38:08 crc kubenswrapper[4957]: I1128 21:38:08.993789 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:38:08 crc kubenswrapper[4957]: I1128 21:38:08.993834 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" 
podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3" gracePeriod=600 Nov 28 21:38:09 crc kubenswrapper[4957]: I1128 21:38:09.385087 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3" exitCode=0 Nov 28 21:38:09 crc kubenswrapper[4957]: I1128 21:38:09.385255 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3"} Nov 28 21:38:09 crc kubenswrapper[4957]: I1128 21:38:09.385694 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"} Nov 28 21:38:09 crc kubenswrapper[4957]: I1128 21:38:09.385717 4957 scope.go:117] "RemoveContainer" containerID="50888d943610ed6b4ea3b139b5806f93225db2f21b613409bd365ac055c0026d" Nov 28 21:39:22 crc kubenswrapper[4957]: I1128 21:39:22.248136 4957 generic.go:334] "Generic (PLEG): container finished" podID="dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" containerID="05c8531ca9a18b57bde7a477e73ca9a14bc882091bce98b08b34a33b6fc1da5f" exitCode=0 Nov 28 21:39:22 crc kubenswrapper[4957]: I1128 21:39:22.248237 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" event={"ID":"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d","Type":"ContainerDied","Data":"05c8531ca9a18b57bde7a477e73ca9a14bc882091bce98b08b34a33b6fc1da5f"} Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.665920 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.733970 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqw8j\" (UniqueName: \"kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734013 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734069 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734137 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734203 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734298 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.734320 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory\") pod \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\" (UID: \"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d\") " Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.743376 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j" (OuterVolumeSpecName: "kube-api-access-gqw8j") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "kube-api-access-gqw8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.743378 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.765225 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.774341 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.776057 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.776833 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory" (OuterVolumeSpecName: "inventory") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.783868 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" (UID: "dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837449 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837494 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837513 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837526 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837539 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqw8j\" (UniqueName: \"kubernetes.io/projected/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-kube-api-access-gqw8j\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837551 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:23 crc kubenswrapper[4957]: I1128 21:39:23.837564 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.279038 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7" event={"ID":"dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d","Type":"ContainerDied","Data":"8f734c7a315009a380f8606b46b9acfa159504e15989a4b8ad19c5b926fb1e7d"}
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.279109 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f734c7a315009a380f8606b46b9acfa159504e15989a4b8ad19c5b926fb1e7d"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.279112 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.414353 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"]
Nov 28 21:39:24 crc kubenswrapper[4957]: E1128 21:39:24.414778 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.414797 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:39:24 crc kubenswrapper[4957]: E1128 21:39:24.414832 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="registry-server"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.414839 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="registry-server"
Nov 28 21:39:24 crc kubenswrapper[4957]: E1128 21:39:24.414857 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="extract-content"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.414863 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="extract-content"
Nov 28 21:39:24 crc kubenswrapper[4957]: E1128 21:39:24.414883 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="extract-utilities"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.414889 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="extract-utilities"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.415100 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3660ab47-f908-46c8-b234-30110c728a62" containerName="registry-server"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.415127 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.416033 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.418343 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.418842 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.418878 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.419884 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.424280 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.438063 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"]
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.451824 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pl69\" (UniqueName: \"kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.452076 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.452678 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.452798 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.452929 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.453060 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.453226 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555552 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pl69\" (UniqueName: \"kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555622 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555668 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555709 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.555777 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.559906 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.560306 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.560779 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.561091 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.561241 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.563088 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.573950 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pl69\" (UniqueName: \"kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:24 crc kubenswrapper[4957]: I1128 21:39:24.734496 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:39:25 crc kubenswrapper[4957]: I1128 21:39:25.268064 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"]
Nov 28 21:39:25 crc kubenswrapper[4957]: I1128 21:39:25.293113 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd" event={"ID":"52615b47-f32d-4e3a-a0a0-dc23c7bc7677","Type":"ContainerStarted","Data":"b683211a9a19f428de149ee87fb8233e5989fd1b04e8655a1cb051750ff1f128"}
Nov 28 21:39:26 crc kubenswrapper[4957]: I1128 21:39:26.314117 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd" event={"ID":"52615b47-f32d-4e3a-a0a0-dc23c7bc7677","Type":"ContainerStarted","Data":"e2e0f402e7825a78f3e5cc9ca09e582d1b8a9e7856c17d44bfb9e4971112be50"}
Nov 28 21:40:38 crc kubenswrapper[4957]: I1128 21:40:38.992448 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:40:38 crc kubenswrapper[4957]: I1128 21:40:38.993340 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:41:08 crc kubenswrapper[4957]: I1128 21:41:08.992253 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:41:08 crc kubenswrapper[4957]: I1128 21:41:08.992820 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:41:38 crc kubenswrapper[4957]: I1128 21:41:38.992812 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 21:41:38 crc kubenswrapper[4957]: I1128 21:41:38.993331 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 21:41:38 crc kubenswrapper[4957]: I1128 21:41:38.993385 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2"
Nov 28 21:41:38 crc kubenswrapper[4957]: I1128 21:41:38.994271 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 21:41:38 crc kubenswrapper[4957]: I1128 21:41:38.994327 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" gracePeriod=600
Nov 28 21:41:39 crc kubenswrapper[4957]: E1128 21:41:39.114473 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d41c2ca_d1ca_46b0_be19_6e4693f0b827.slice/crio-acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d41c2ca_d1ca_46b0_be19_6e4693f0b827.slice/crio-conmon-acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 21:41:39 crc kubenswrapper[4957]: E1128 21:41:39.116932 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:41:39 crc kubenswrapper[4957]: I1128 21:41:39.815630 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" exitCode=0
Nov 28 21:41:39 crc kubenswrapper[4957]: I1128 21:41:39.815664 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"}
Nov 28 21:41:39 crc kubenswrapper[4957]: I1128 21:41:39.816282 4957 scope.go:117] "RemoveContainer" containerID="b857508c825d740f7d13d228d4d1d69689441e28a74437c10b1674960e0710d3"
Nov 28 21:41:39 crc kubenswrapper[4957]: I1128 21:41:39.816980 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:41:39 crc kubenswrapper[4957]: E1128 21:41:39.817372 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:41:39 crc kubenswrapper[4957]: I1128 21:41:39.838179 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd" podStartSLOduration=135.274970385 podStartE2EDuration="2m15.838157718s" podCreationTimestamp="2025-11-28 21:39:24 +0000 UTC" firstStartedPulling="2025-11-28 21:39:25.274948845 +0000 UTC m=+3004.743596754" lastFinishedPulling="2025-11-28 21:39:25.838136158 +0000 UTC m=+3005.306784087" observedRunningTime="2025-11-28 21:39:26.33316992 +0000 UTC m=+3005.801817829" watchObservedRunningTime="2025-11-28 21:41:39.838157718 +0000 UTC m=+3139.306805627"
Nov 28 21:41:40 crc kubenswrapper[4957]: I1128 21:41:40.826849 4957 generic.go:334] "Generic (PLEG): container finished" podID="52615b47-f32d-4e3a-a0a0-dc23c7bc7677" containerID="e2e0f402e7825a78f3e5cc9ca09e582d1b8a9e7856c17d44bfb9e4971112be50" exitCode=0
Nov 28 21:41:40 crc kubenswrapper[4957]: I1128 21:41:40.826881 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd" event={"ID":"52615b47-f32d-4e3a-a0a0-dc23c7bc7677","Type":"ContainerDied","Data":"e2e0f402e7825a78f3e5cc9ca09e582d1b8a9e7856c17d44bfb9e4971112be50"}
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.303474 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.389321 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.389412 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pl69\" (UniqueName: \"kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.390235 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.390311 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.390365 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.390381 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.390438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle\") pod \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\" (UID: \"52615b47-f32d-4e3a-a0a0-dc23c7bc7677\") "
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.400776 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.400814 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69" (OuterVolumeSpecName: "kube-api-access-9pl69") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "kube-api-access-9pl69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.425695 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.427562 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.436358 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.442952 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.447683 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory" (OuterVolumeSpecName: "inventory") pod "52615b47-f32d-4e3a-a0a0-dc23c7bc7677" (UID: "52615b47-f32d-4e3a-a0a0-dc23c7bc7677"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492676 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492708 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492720 4957 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492730 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492740 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pl69\" (UniqueName: \"kubernetes.io/projected/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-kube-api-access-9pl69\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492748 4957 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.492757 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/52615b47-f32d-4e3a-a0a0-dc23c7bc7677-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.849840 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd" event={"ID":"52615b47-f32d-4e3a-a0a0-dc23c7bc7677","Type":"ContainerDied","Data":"b683211a9a19f428de149ee87fb8233e5989fd1b04e8655a1cb051750ff1f128"}
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.850106 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b683211a9a19f428de149ee87fb8233e5989fd1b04e8655a1cb051750ff1f128"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.849923 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.931406 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"]
Nov 28 21:41:42 crc kubenswrapper[4957]: E1128 21:41:42.932270 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52615b47-f32d-4e3a-a0a0-dc23c7bc7677" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.932317 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="52615b47-f32d-4e3a-a0a0-dc23c7bc7677" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.932715 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="52615b47-f32d-4e3a-a0a0-dc23c7bc7677" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.933832 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.937135 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.939392 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.941140 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.941324 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.941526 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gsx4l"
Nov 28 21:41:42 crc kubenswrapper[4957]: I1128 21:41:42.945338 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"]
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.003991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.004131 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.004246 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.004278 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztljr\" (UniqueName: \"kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.004365 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.106588 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.106735 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.107430 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.107508 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztljr\" (UniqueName: \"kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.107601 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.111027 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.111032 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.111553 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.112771 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.125026 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztljr\" (UniqueName: \"kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-js78h\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.258936 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.789177 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"]
Nov 28 21:41:43 crc kubenswrapper[4957]: I1128 21:41:43.874546 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h" event={"ID":"80b23341-10eb-4c68-aba7-e36583140466","Type":"ContainerStarted","Data":"4471a03cc56446af8f6288c3c0ac03b2ac2c8250cc828a89476557c242eb69f9"}
Nov 28 21:41:44 crc kubenswrapper[4957]: I1128 21:41:44.892445 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h" event={"ID":"80b23341-10eb-4c68-aba7-e36583140466","Type":"ContainerStarted","Data":"598f2b7cffc6e163adeab39e85c7129e2acaa68a8416cfe92226c1d68e159967"}
Nov 28 21:41:44 crc kubenswrapper[4957]: I1128 21:41:44.912472 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h" podStartSLOduration=2.450824797 podStartE2EDuration="2.912454115s" podCreationTimestamp="2025-11-28 21:41:42 +0000 UTC" firstStartedPulling="2025-11-28 21:41:43.79059236 +0000 UTC m=+3143.259240269" lastFinishedPulling="2025-11-28 21:41:44.252221678 +0000 UTC m=+3143.720869587" observedRunningTime="2025-11-28 21:41:44.910813905 +0000 UTC m=+3144.379461834" watchObservedRunningTime="2025-11-28 21:41:44.912454115 +0000 UTC m=+3144.381102024"
Nov 28 21:41:51 crc kubenswrapper[4957]: I1128 21:41:51.813334 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:41:51 crc kubenswrapper[4957]: E1128 21:41:51.815112 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:42:01 crc kubenswrapper[4957]: I1128 21:42:01.070910 4957 generic.go:334] "Generic (PLEG): container finished" podID="80b23341-10eb-4c68-aba7-e36583140466" containerID="598f2b7cffc6e163adeab39e85c7129e2acaa68a8416cfe92226c1d68e159967" exitCode=0
Nov 28 21:42:01 crc kubenswrapper[4957]: I1128 21:42:01.070974 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h" event={"ID":"80b23341-10eb-4c68-aba7-e36583140466","Type":"ContainerDied","Data":"598f2b7cffc6e163adeab39e85c7129e2acaa68a8416cfe92226c1d68e159967"}
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.627614 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.663764 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory\") pod \"80b23341-10eb-4c68-aba7-e36583140466\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") "
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.663987 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1\") pod \"80b23341-10eb-4c68-aba7-e36583140466\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") "
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.664030 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key\") pod \"80b23341-10eb-4c68-aba7-e36583140466\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") "
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.664102 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0\") pod \"80b23341-10eb-4c68-aba7-e36583140466\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") "
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.664320 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztljr\" (UniqueName: \"kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr\") pod \"80b23341-10eb-4c68-aba7-e36583140466\" (UID: \"80b23341-10eb-4c68-aba7-e36583140466\") "
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.669495 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr" (OuterVolumeSpecName: "kube-api-access-ztljr") pod "80b23341-10eb-4c68-aba7-e36583140466" (UID: "80b23341-10eb-4c68-aba7-e36583140466"). InnerVolumeSpecName "kube-api-access-ztljr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.696827 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory" (OuterVolumeSpecName: "inventory") pod "80b23341-10eb-4c68-aba7-e36583140466" (UID: "80b23341-10eb-4c68-aba7-e36583140466"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.700423 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "80b23341-10eb-4c68-aba7-e36583140466" (UID: "80b23341-10eb-4c68-aba7-e36583140466"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.705198 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80b23341-10eb-4c68-aba7-e36583140466" (UID: "80b23341-10eb-4c68-aba7-e36583140466"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.721300 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "80b23341-10eb-4c68-aba7-e36583140466" (UID: "80b23341-10eb-4c68-aba7-e36583140466"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.767201 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztljr\" (UniqueName: \"kubernetes.io/projected/80b23341-10eb-4c68-aba7-e36583140466-kube-api-access-ztljr\") on node \"crc\" DevicePath \"\""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.767261 4957 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.767275 4957 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.767286 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 21:42:02 crc kubenswrapper[4957]: I1128 21:42:02.767298 4957 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80b23341-10eb-4c68-aba7-e36583140466-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 28 21:42:03 crc kubenswrapper[4957]: I1128 21:42:03.097428 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h" event={"ID":"80b23341-10eb-4c68-aba7-e36583140466","Type":"ContainerDied","Data":"4471a03cc56446af8f6288c3c0ac03b2ac2c8250cc828a89476557c242eb69f9"}
Nov 28 21:42:03 crc kubenswrapper[4957]: I1128 21:42:03.097499 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4471a03cc56446af8f6288c3c0ac03b2ac2c8250cc828a89476557c242eb69f9"
Nov 28 21:42:03 crc kubenswrapper[4957]: I1128 21:42:03.097580 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-js78h"
Nov 28 21:42:04 crc kubenswrapper[4957]: I1128 21:42:04.813303 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:42:04 crc kubenswrapper[4957]: E1128 21:42:04.813921 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:42:16 crc kubenswrapper[4957]: I1128 21:42:16.814011 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:42:16 crc kubenswrapper[4957]: E1128 21:42:16.814748 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:42:28 crc kubenswrapper[4957]: I1128 21:42:28.814588 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:42:28 crc kubenswrapper[4957]: E1128 21:42:28.815358 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:42:43 crc kubenswrapper[4957]: I1128 21:42:43.815006 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:42:43 crc kubenswrapper[4957]: E1128 21:42:43.815789 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:42:54 crc kubenswrapper[4957]: I1128 21:42:54.813934 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:42:54 crc kubenswrapper[4957]: E1128 21:42:54.814743 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:43:09 crc kubenswrapper[4957]: I1128 21:43:09.814058 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:43:09 crc kubenswrapper[4957]: E1128 21:43:09.814996 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:43:23 crc kubenswrapper[4957]: I1128 21:43:23.813268 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:43:23 crc kubenswrapper[4957]: E1128 21:43:23.814054 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:43:38 crc kubenswrapper[4957]: I1128 21:43:38.814882 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:43:38 crc kubenswrapper[4957]: E1128 21:43:38.815771 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.825405 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"]
Nov 28 21:43:48 crc kubenswrapper[4957]: E1128 21:43:48.826486 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b23341-10eb-4c68-aba7-e36583140466" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.826503 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b23341-10eb-4c68-aba7-e36583140466" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.826704 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b23341-10eb-4c68-aba7-e36583140466" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.828373 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.840326 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"]
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.984189 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mzg\" (UniqueName: \"kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.984784 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:48 crc kubenswrapper[4957]: I1128 21:43:48.984814 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.087043 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.087094 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.087203 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mzg\" (UniqueName: \"kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.087615 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.087865 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.111329 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mzg\" (UniqueName: \"kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg\") pod \"redhat-operators-7rk9k\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.160018 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:49 crc kubenswrapper[4957]: I1128 21:43:49.654169 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"]
Nov 28 21:43:50 crc kubenswrapper[4957]: I1128 21:43:50.299103 4957 generic.go:334] "Generic (PLEG): container finished" podID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerID="57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f" exitCode=0
Nov 28 21:43:50 crc kubenswrapper[4957]: I1128 21:43:50.299181 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerDied","Data":"57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f"}
Nov 28 21:43:50 crc kubenswrapper[4957]: I1128 21:43:50.299434 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerStarted","Data":"b26b90985351396d4e48939f6171b935007c8d752d20c0c81d01cc36280d9436"}
Nov 28 21:43:50 crc kubenswrapper[4957]: I1128 21:43:50.301504 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 21:43:51 crc kubenswrapper[4957]: I1128 21:43:51.311857 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerStarted","Data":"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c"}
Nov 28 21:43:51 crc kubenswrapper[4957]: I1128 21:43:51.813156 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:43:51 crc kubenswrapper[4957]: E1128 21:43:51.813434 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:43:54 crc kubenswrapper[4957]: I1128 21:43:54.344353 4957 generic.go:334] "Generic (PLEG): container finished" podID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerID="60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c" exitCode=0
Nov 28 21:43:54 crc kubenswrapper[4957]: I1128 21:43:54.344475 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerDied","Data":"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c"}
Nov 28 21:43:55 crc kubenswrapper[4957]: I1128 21:43:55.357641 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerStarted","Data":"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc"}
Nov 28 21:43:55 crc kubenswrapper[4957]: I1128 21:43:55.391334 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7rk9k" podStartSLOduration=2.833698041 podStartE2EDuration="7.391308592s" podCreationTimestamp="2025-11-28 21:43:48 +0000 UTC" firstStartedPulling="2025-11-28 21:43:50.301160774 +0000 UTC m=+3269.769808703" lastFinishedPulling="2025-11-28 21:43:54.858771335 +0000 UTC m=+3274.327419254" observedRunningTime="2025-11-28 21:43:55.377810779 +0000 UTC m=+3274.846458688" watchObservedRunningTime="2025-11-28 21:43:55.391308592 +0000 UTC m=+3274.859956511"
Nov 28 21:43:59 crc kubenswrapper[4957]: I1128 21:43:59.160826 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:43:59 crc kubenswrapper[4957]: I1128 21:43:59.161424 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:44:00 crc kubenswrapper[4957]: I1128 21:44:00.221974 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7rk9k" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="registry-server" probeResult="failure" output=<
Nov 28 21:44:00 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s
Nov 28 21:44:00 crc kubenswrapper[4957]: >
Nov 28 21:44:06 crc kubenswrapper[4957]: I1128 21:44:06.813755 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c"
Nov 28 21:44:06 crc kubenswrapper[4957]: E1128 21:44:06.814737 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:44:09 crc kubenswrapper[4957]: I1128 21:44:09.206383 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:44:09 crc kubenswrapper[4957]: I1128 21:44:09.265434 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7rk9k"
Nov 28 21:44:09 crc kubenswrapper[4957]: I1128 21:44:09.446764 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"]
Nov 28 21:44:10 crc kubenswrapper[4957]: I1128 21:44:10.512790 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7rk9k" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="registry-server" containerID="cri-o://f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc" gracePeriod=2
Nov 28 21:44:10 crc kubenswrapper[4957]: I1128 21:44:10.971942 4957 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rk9k" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.119376 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content\") pod \"d809055c-c0a1-42b2-8f83-e3a38b22269c\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.119775 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities\") pod \"d809055c-c0a1-42b2-8f83-e3a38b22269c\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.119975 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2mzg\" (UniqueName: \"kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg\") pod \"d809055c-c0a1-42b2-8f83-e3a38b22269c\" (UID: \"d809055c-c0a1-42b2-8f83-e3a38b22269c\") " Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.122123 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities" (OuterVolumeSpecName: "utilities") pod "d809055c-c0a1-42b2-8f83-e3a38b22269c" (UID: "d809055c-c0a1-42b2-8f83-e3a38b22269c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.129247 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg" (OuterVolumeSpecName: "kube-api-access-h2mzg") pod "d809055c-c0a1-42b2-8f83-e3a38b22269c" (UID: "d809055c-c0a1-42b2-8f83-e3a38b22269c"). InnerVolumeSpecName "kube-api-access-h2mzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.220299 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d809055c-c0a1-42b2-8f83-e3a38b22269c" (UID: "d809055c-c0a1-42b2-8f83-e3a38b22269c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.227948 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2mzg\" (UniqueName: \"kubernetes.io/projected/d809055c-c0a1-42b2-8f83-e3a38b22269c-kube-api-access-h2mzg\") on node \"crc\" DevicePath \"\"" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.227978 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.227992 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d809055c-c0a1-42b2-8f83-e3a38b22269c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.525422 4957 generic.go:334] "Generic (PLEG): container finished" podID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerID="f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc" exitCode=0 Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.525506 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerDied","Data":"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc"} Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.525533 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rk9k" event={"ID":"d809055c-c0a1-42b2-8f83-e3a38b22269c","Type":"ContainerDied","Data":"b26b90985351396d4e48939f6171b935007c8d752d20c0c81d01cc36280d9436"} Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.525533 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rk9k" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.525549 4957 scope.go:117] "RemoveContainer" containerID="f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.563291 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"] Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.565867 4957 scope.go:117] "RemoveContainer" containerID="60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.575252 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7rk9k"] Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.589901 4957 scope.go:117] "RemoveContainer" containerID="57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.641091 4957 scope.go:117] "RemoveContainer" containerID="f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc" Nov 28 21:44:11 crc kubenswrapper[4957]: E1128 21:44:11.641618 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc\": container with ID starting with f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc not found: ID does not exist" containerID="f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.641659 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc"} err="failed to get container status \"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc\": rpc error: code = NotFound desc = could not find container \"f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc\": container with ID starting with f68c8147ae534f3ba5a04bab481c2bf0ff1ee3b1020f5e326b37886cc35316cc not found: ID does not exist" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.641685 4957 scope.go:117] "RemoveContainer" containerID="60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c" Nov 28 21:44:11 crc kubenswrapper[4957]: E1128 21:44:11.642060 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c\": container with ID starting with 60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c not found: ID does not exist" containerID="60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.642092 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c"} err="failed to get container status \"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c\": rpc error: code = NotFound desc = could not find container \"60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c\": container with ID starting with 60adf032fade4a3086df5b329a9ab27411fd31f03fc4b5f77f5c350944e9b60c not found: ID does not exist" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.642112 4957 scope.go:117] "RemoveContainer" 
containerID="57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f" Nov 28 21:44:11 crc kubenswrapper[4957]: E1128 21:44:11.642662 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f\": container with ID starting with 57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f not found: ID does not exist" containerID="57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f" Nov 28 21:44:11 crc kubenswrapper[4957]: I1128 21:44:11.642689 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f"} err="failed to get container status \"57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f\": rpc error: code = NotFound desc = could not find container \"57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f\": container with ID starting with 57f4fa85ad76a5dd4a568b9f66fd0d1f15ab7369fb84c8b353501b051bb0149f not found: ID does not exist" Nov 28 21:44:12 crc kubenswrapper[4957]: I1128 21:44:12.827337 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" path="/var/lib/kubelet/pods/d809055c-c0a1-42b2-8f83-e3a38b22269c/volumes" Nov 28 21:44:18 crc kubenswrapper[4957]: I1128 21:44:18.813469 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:44:18 crc kubenswrapper[4957]: E1128 21:44:18.814320 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:44:31 crc kubenswrapper[4957]: I1128 21:44:31.813443 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:44:31 crc kubenswrapper[4957]: E1128 21:44:31.814706 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:44:42 crc kubenswrapper[4957]: I1128 21:44:42.813273 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:44:42 crc kubenswrapper[4957]: E1128 21:44:42.814182 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:44:56 crc kubenswrapper[4957]: I1128 21:44:56.812952 4957 scope.go:117] "RemoveContainer" 
containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:44:56 crc kubenswrapper[4957]: E1128 21:44:56.815025 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.144244 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95"] Nov 28 21:45:00 crc kubenswrapper[4957]: E1128 21:45:00.145420 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="extract-utilities" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.145460 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="extract-utilities" Nov 28 21:45:00 crc kubenswrapper[4957]: E1128 21:45:00.145480 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="registry-server" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.145486 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="registry-server" Nov 28 21:45:00 crc kubenswrapper[4957]: E1128 21:45:00.145496 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="extract-content" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.145502 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="extract-content" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.145814 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d809055c-c0a1-42b2-8f83-e3a38b22269c" containerName="registry-server" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.146700 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.150082 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.151317 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.157231 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95"] Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.185187 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.185278 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ts6\" (UniqueName: \"kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.185311 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.287376 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.287688 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ts6\" (UniqueName: \"kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.287719 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.289653 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume\") pod 
\"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.294025 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.307376 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ts6\" (UniqueName: \"kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6\") pod \"collect-profiles-29406105-jdh95\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.485097 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:00 crc kubenswrapper[4957]: I1128 21:45:00.941945 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95"] Nov 28 21:45:01 crc kubenswrapper[4957]: I1128 21:45:01.387892 4957 generic.go:334] "Generic (PLEG): container finished" podID="1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" containerID="31f10edb7ce4c58574b8a0ee0b4ef3262431d919f85d3691cc24b76f15900135" exitCode=0 Nov 28 21:45:01 crc kubenswrapper[4957]: I1128 21:45:01.388125 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" event={"ID":"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b","Type":"ContainerDied","Data":"31f10edb7ce4c58574b8a0ee0b4ef3262431d919f85d3691cc24b76f15900135"} Nov 28 21:45:01 crc kubenswrapper[4957]: I1128 21:45:01.388151 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" event={"ID":"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b","Type":"ContainerStarted","Data":"a46e0a0eae02c7f42a500b0a54ab9e42f1221a05da788c897d0fdd32be7d3f21"} Nov 28 21:45:02 crc kubenswrapper[4957]: I1128 21:45:02.958147 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.069984 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume\") pod \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.070098 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8ts6\" (UniqueName: \"kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6\") pod \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.070337 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume\") pod \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\" (UID: \"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b\") " Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.071408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume" (OuterVolumeSpecName: "config-volume") pod "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" (UID: "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.075464 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" (UID: "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.082556 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6" (OuterVolumeSpecName: "kube-api-access-l8ts6") pod "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" (UID: "1aa2ffd8-a33a-444e-b95b-55fd7bbf349b"). InnerVolumeSpecName "kube-api-access-l8ts6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.173071 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.173108 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8ts6\" (UniqueName: \"kubernetes.io/projected/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-kube-api-access-l8ts6\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.173124 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.412310 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" event={"ID":"1aa2ffd8-a33a-444e-b95b-55fd7bbf349b","Type":"ContainerDied","Data":"a46e0a0eae02c7f42a500b0a54ab9e42f1221a05da788c897d0fdd32be7d3f21"} Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.412356 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46e0a0eae02c7f42a500b0a54ab9e42f1221a05da788c897d0fdd32be7d3f21" Nov 28 21:45:03 crc kubenswrapper[4957]: I1128 21:45:03.412338 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95" Nov 28 21:45:04 crc kubenswrapper[4957]: I1128 21:45:04.088780 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"] Nov 28 21:45:04 crc kubenswrapper[4957]: I1128 21:45:04.098479 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406060-29bqb"] Nov 28 21:45:04 crc kubenswrapper[4957]: I1128 21:45:04.828632 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f7dc1c-d499-496d-a1d6-f6677e8a6865" path="/var/lib/kubelet/pods/c5f7dc1c-d499-496d-a1d6-f6677e8a6865/volumes" Nov 28 21:45:11 crc kubenswrapper[4957]: I1128 21:45:11.812879 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:45:11 crc kubenswrapper[4957]: E1128 21:45:11.813640 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:45:24 crc kubenswrapper[4957]: I1128 21:45:24.813254 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:45:24 crc kubenswrapper[4957]: E1128 21:45:24.814024 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:45:29 crc kubenswrapper[4957]: I1128 21:45:29.399558 4957 scope.go:117] "RemoveContainer" containerID="5ee24ee3c5793df8be16dd44ff52273446b3134b21b9d36121a5ad5cd7059d59" Nov 28 21:45:35 crc kubenswrapper[4957]: I1128 21:45:35.813626 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:45:35 crc kubenswrapper[4957]: E1128 21:45:35.814640 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.068836 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:42 crc kubenswrapper[4957]: E1128 21:45:42.070782 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" containerName="collect-profiles" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.070878 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" containerName="collect-profiles" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.071225 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" containerName="collect-profiles" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.073425 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.088867 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.133722 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.133921 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.134034 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl26\" (UniqueName: \"kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.235894 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.236008 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl26\" (UniqueName: \"kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.236100 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.236448 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.236590 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.256497 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kxl26\" (UniqueName: \"kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26\") pod \"community-operators-gwtf2\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.392996 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:42 crc kubenswrapper[4957]: I1128 21:45:42.965570 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:43 crc kubenswrapper[4957]: I1128 21:45:43.813839 4957 generic.go:334] "Generic (PLEG): container finished" podID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerID="5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547" exitCode=0 Nov 28 21:45:43 crc kubenswrapper[4957]: I1128 21:45:43.814386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerDied","Data":"5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547"} Nov 28 21:45:43 crc kubenswrapper[4957]: I1128 21:45:43.814417 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerStarted","Data":"fd983fe99b83b3ee2c082ef4c9babbb3f3b81d580c8815c0ac99825181248c63"} Nov 28 21:45:44 crc kubenswrapper[4957]: I1128 21:45:44.835435 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerStarted","Data":"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a"} Nov 28 21:45:45 crc kubenswrapper[4957]: I1128 21:45:45.853796 4957 generic.go:334] "Generic (PLEG): container finished" podID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerID="897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a" exitCode=0 Nov 28 21:45:45 crc kubenswrapper[4957]: I1128 21:45:45.853899 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerDied","Data":"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a"} Nov 28 21:45:46 crc kubenswrapper[4957]: I1128 21:45:46.871024 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerStarted","Data":"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998"} Nov 28 21:45:46 crc kubenswrapper[4957]: I1128 21:45:46.897094 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gwtf2" podStartSLOduration=2.484416171 podStartE2EDuration="4.897077709s" podCreationTimestamp="2025-11-28 21:45:42 +0000 UTC" firstStartedPulling="2025-11-28 21:45:43.81995076 +0000 UTC m=+3383.288598669" lastFinishedPulling="2025-11-28 21:45:46.232612258 +0000 UTC m=+3385.701260207" observedRunningTime="2025-11-28 21:45:46.889290867 +0000 UTC m=+3386.357938776" watchObservedRunningTime="2025-11-28 21:45:46.897077709 +0000 UTC m=+3386.365725618" Nov 28 21:45:50 crc kubenswrapper[4957]: I1128 21:45:50.821888 4957 scope.go:117] "RemoveContainer" 
containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:45:50 crc kubenswrapper[4957]: E1128 21:45:50.822638 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:45:52 crc kubenswrapper[4957]: I1128 21:45:52.394165 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:52 crc kubenswrapper[4957]: I1128 21:45:52.394486 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:52 crc kubenswrapper[4957]: I1128 21:45:52.448633 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:52 crc kubenswrapper[4957]: I1128 21:45:52.989790 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:53 crc kubenswrapper[4957]: I1128 21:45:53.039865 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:54 crc kubenswrapper[4957]: I1128 21:45:54.989979 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gwtf2" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="registry-server" containerID="cri-o://c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998" gracePeriod=2 Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.526369 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.630946 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxl26\" (UniqueName: \"kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26\") pod \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.631145 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities\") pod \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.631179 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content\") pod \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\" (UID: \"be00c71a-58e1-4e60-bbc6-4cbf40d314de\") " Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.631932 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities" (OuterVolumeSpecName: "utilities") pod "be00c71a-58e1-4e60-bbc6-4cbf40d314de" (UID: "be00c71a-58e1-4e60-bbc6-4cbf40d314de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.636963 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26" (OuterVolumeSpecName: "kube-api-access-kxl26") pod "be00c71a-58e1-4e60-bbc6-4cbf40d314de" (UID: "be00c71a-58e1-4e60-bbc6-4cbf40d314de"). InnerVolumeSpecName "kube-api-access-kxl26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.678093 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be00c71a-58e1-4e60-bbc6-4cbf40d314de" (UID: "be00c71a-58e1-4e60-bbc6-4cbf40d314de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.733553 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxl26\" (UniqueName: \"kubernetes.io/projected/be00c71a-58e1-4e60-bbc6-4cbf40d314de-kube-api-access-kxl26\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.733829 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:55 crc kubenswrapper[4957]: I1128 21:45:55.733839 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be00c71a-58e1-4e60-bbc6-4cbf40d314de-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.003089 4957 generic.go:334] "Generic (PLEG): container finished" podID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerID="c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998" exitCode=0 Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.003140 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerDied","Data":"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998"} Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.003167 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwtf2" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.003192 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwtf2" event={"ID":"be00c71a-58e1-4e60-bbc6-4cbf40d314de","Type":"ContainerDied","Data":"fd983fe99b83b3ee2c082ef4c9babbb3f3b81d580c8815c0ac99825181248c63"} Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.003204 4957 scope.go:117] "RemoveContainer" containerID="c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.044497 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.047617 4957 scope.go:117] "RemoveContainer" containerID="897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.055491 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gwtf2"] Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.069958 4957 scope.go:117] "RemoveContainer" containerID="5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.131791 4957 scope.go:117] "RemoveContainer" containerID="c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998" Nov 28 21:45:56 crc kubenswrapper[4957]: E1128 21:45:56.132358 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998\": container with ID starting with c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998 not found: ID does not exist" containerID="c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.132410 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998"} err="failed to get container status \"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998\": rpc error: code = NotFound desc = could not find container \"c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998\": container with ID starting with c91191dd75f1b08b95bf20a32c77e50430487d6a7a52f66ebae401e0759dd998 not found: ID does not exist" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.132454 4957 scope.go:117] "RemoveContainer" containerID="897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a" Nov 28 21:45:56 crc kubenswrapper[4957]: E1128 21:45:56.133071 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a\": container with ID starting with 897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a not found: ID does not exist" containerID="897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.133123 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a"} err="failed to get container status \"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a\": rpc error: code = NotFound desc = could not find 
container \"897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a\": container with ID starting with 897485199efc99f39c1782ed6093bccc7acf84a41b3b1079721fec9a889b045a not found: ID does not exist" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.133158 4957 scope.go:117] "RemoveContainer" containerID="5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547" Nov 28 21:45:56 crc kubenswrapper[4957]: E1128 21:45:56.133729 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547\": container with ID starting with 5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547 not found: ID does not exist" containerID="5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.133768 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547"} err="failed to get container status \"5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547\": rpc error: code = NotFound desc = could not find container \"5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547\": container with ID starting with 5cef9f8e04d47963f019ec55971f6030467cd0f8bc4063e81f2f032f942da547 not found: ID does not exist" Nov 28 21:45:56 crc kubenswrapper[4957]: I1128 21:45:56.824569 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" path="/var/lib/kubelet/pods/be00c71a-58e1-4e60-bbc6-4cbf40d314de/volumes" Nov 28 21:46:05 crc kubenswrapper[4957]: I1128 21:46:05.814121 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:46:05 crc kubenswrapper[4957]: E1128 21:46:05.815493 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.483337 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:07 crc kubenswrapper[4957]: E1128 21:46:07.484160 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="extract-utilities" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.484172 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="extract-utilities" Nov 28 21:46:07 crc kubenswrapper[4957]: E1128 21:46:07.484220 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="registry-server" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.484227 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="registry-server" Nov 28 21:46:07 crc kubenswrapper[4957]: E1128 21:46:07.484240 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="extract-content" Nov 28 21:46:07 
crc kubenswrapper[4957]: I1128 21:46:07.484246 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="extract-content" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.484508 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="be00c71a-58e1-4e60-bbc6-4cbf40d314de" containerName="registry-server" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.490978 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.502790 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.634969 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjgq\" (UniqueName: \"kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.635368 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.635566 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.737522 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.737692 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.737783 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjgq\" (UniqueName: \"kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.738136 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " 
pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.738261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.762200 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqjgq\" (UniqueName: \"kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq\") pod \"certified-operators-9xl92\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:07 crc kubenswrapper[4957]: I1128 21:46:07.837798 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:08 crc kubenswrapper[4957]: I1128 21:46:08.438605 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:09 crc kubenswrapper[4957]: I1128 21:46:09.166579 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerID="82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446" exitCode=0 Nov 28 21:46:09 crc kubenswrapper[4957]: I1128 21:46:09.166938 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerDied","Data":"82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446"} Nov 28 21:46:09 crc kubenswrapper[4957]: I1128 21:46:09.166967 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerStarted","Data":"d59360e28405a1a4634947908d1d7a1cde1eae79c118a4252ae34e4a38efe940"} Nov 28 21:46:10 crc kubenswrapper[4957]: I1128 21:46:10.181755 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerStarted","Data":"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab"} Nov 28 21:46:12 crc kubenswrapper[4957]: I1128 21:46:12.201536 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerID="565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab" exitCode=0 Nov 28 21:46:12 crc kubenswrapper[4957]: I1128 21:46:12.201610 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerDied","Data":"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab"} Nov 28 21:46:13 crc kubenswrapper[4957]: I1128 21:46:13.212687 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerStarted","Data":"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541"} Nov 28 21:46:13 crc kubenswrapper[4957]: I1128 21:46:13.232428 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xl92" 
podStartSLOduration=2.696240096 podStartE2EDuration="6.232412419s" podCreationTimestamp="2025-11-28 21:46:07 +0000 UTC" firstStartedPulling="2025-11-28 21:46:09.170075666 +0000 UTC m=+3408.638723615" lastFinishedPulling="2025-11-28 21:46:12.706248029 +0000 UTC m=+3412.174895938" observedRunningTime="2025-11-28 21:46:13.23042003 +0000 UTC m=+3412.699067939" watchObservedRunningTime="2025-11-28 21:46:13.232412419 +0000 UTC m=+3412.701060328" Nov 28 21:46:17 crc kubenswrapper[4957]: I1128 21:46:17.838085 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:17 crc kubenswrapper[4957]: I1128 21:46:17.839400 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:17 crc kubenswrapper[4957]: I1128 21:46:17.914691 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:18 crc kubenswrapper[4957]: I1128 21:46:18.323236 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:18 crc kubenswrapper[4957]: I1128 21:46:18.376666 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:19 crc kubenswrapper[4957]: I1128 21:46:19.813364 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:46:19 crc kubenswrapper[4957]: E1128 21:46:19.813863 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.280712 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xl92" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="registry-server" containerID="cri-o://39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541" gracePeriod=2 Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.822964 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.957717 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqjgq\" (UniqueName: \"kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq\") pod \"9ccca50e-23f1-4812-ad00-11b912830dd0\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.957812 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content\") pod \"9ccca50e-23f1-4812-ad00-11b912830dd0\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.957851 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities\") pod \"9ccca50e-23f1-4812-ad00-11b912830dd0\" (UID: \"9ccca50e-23f1-4812-ad00-11b912830dd0\") " Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.958554 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities" (OuterVolumeSpecName: "utilities") pod "9ccca50e-23f1-4812-ad00-11b912830dd0" (UID: "9ccca50e-23f1-4812-ad00-11b912830dd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:46:20 crc kubenswrapper[4957]: I1128 21:46:20.963423 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq" (OuterVolumeSpecName: "kube-api-access-rqjgq") pod "9ccca50e-23f1-4812-ad00-11b912830dd0" (UID: "9ccca50e-23f1-4812-ad00-11b912830dd0"). InnerVolumeSpecName "kube-api-access-rqjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.002828 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ccca50e-23f1-4812-ad00-11b912830dd0" (UID: "9ccca50e-23f1-4812-ad00-11b912830dd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.061061 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqjgq\" (UniqueName: \"kubernetes.io/projected/9ccca50e-23f1-4812-ad00-11b912830dd0-kube-api-access-rqjgq\") on node \"crc\" DevicePath \"\"" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.061103 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.061119 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccca50e-23f1-4812-ad00-11b912830dd0-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.292932 4957 generic.go:334] "Generic (PLEG): container finished" podID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerID="39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541" exitCode=0 Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.292985 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerDied","Data":"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541"} Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.293026 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xl92" event={"ID":"9ccca50e-23f1-4812-ad00-11b912830dd0","Type":"ContainerDied","Data":"d59360e28405a1a4634947908d1d7a1cde1eae79c118a4252ae34e4a38efe940"} Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.293047 4957 scope.go:117] "RemoveContainer" containerID="39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.293400 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xl92" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.324920 4957 scope.go:117] "RemoveContainer" containerID="565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.331725 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.342616 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xl92"] Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.353505 4957 scope.go:117] "RemoveContainer" containerID="82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.407608 4957 scope.go:117] "RemoveContainer" containerID="39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541" Nov 28 21:46:21 crc kubenswrapper[4957]: E1128 21:46:21.408487 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541\": container with ID starting with 39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541 not found: ID does not exist" containerID="39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.408543 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541"} err="failed to get container status \"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541\": rpc error: code = NotFound desc = could not find container \"39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541\": container with ID starting with 39bd35d1571d507d115ba6ad38e9beff2a40bd69b2aa06771ede0de99848a541 not found: ID does not exist" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.408594 4957 scope.go:117] "RemoveContainer" containerID="565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab" Nov 28 21:46:21 crc kubenswrapper[4957]: E1128 21:46:21.409098 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab\": container with ID starting with 565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab not found: ID does not exist" containerID="565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.409138 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab"} err="failed to get container status \"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab\": rpc error: code = NotFound desc = could not find container \"565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab\": container with ID starting with 565189052e210057c11c09cfd186819e8060383dce352699e981b21710ba02ab not found: ID does not exist" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.409164 4957 scope.go:117] "RemoveContainer" containerID="82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446" Nov 28 21:46:21 crc kubenswrapper[4957]: E1128 21:46:21.409444 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446\": container with ID starting with 82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446 not found: ID does not exist" containerID="82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446" Nov 28 21:46:21 crc kubenswrapper[4957]: I1128 21:46:21.409480 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446"} err="failed to get container status \"82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446\": rpc error: code = NotFound desc = could not find container \"82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446\": container with ID starting with 82223c2d402cb229a197cf16d7b834aeabcbe017739cf028571da6dcbe875446 not found: ID does not exist" Nov 28 21:46:22 crc kubenswrapper[4957]: I1128 21:46:22.827183 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" path="/var/lib/kubelet/pods/9ccca50e-23f1-4812-ad00-11b912830dd0/volumes" Nov 28 21:46:33 crc kubenswrapper[4957]: I1128 21:46:33.812849 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:46:33 crc kubenswrapper[4957]: E1128 21:46:33.813831 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:46:48 crc kubenswrapper[4957]: I1128 21:46:48.813847 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:46:49 crc kubenswrapper[4957]: I1128 21:46:49.615132 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac"} Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.750067 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:32 crc kubenswrapper[4957]: E1128 21:47:32.752082 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="extract-content" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.752159 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="extract-content" Nov 28 21:47:32 crc kubenswrapper[4957]: E1128 21:47:32.752238 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="extract-utilities" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.752299 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="extract-utilities" Nov 28 21:47:32 crc kubenswrapper[4957]: E1128 21:47:32.752400 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="registry-server" 
Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.752750 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="registry-server" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.753048 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccca50e-23f1-4812-ad00-11b912830dd0" containerName="registry-server" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.754746 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.771580 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.888085 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcfg\" (UniqueName: \"kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.888143 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.888636 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.991901 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwcfg\" (UniqueName: \"kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.991965 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.992086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.992518 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " 
pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:32 crc kubenswrapper[4957]: I1128 21:47:32.992575 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:33 crc kubenswrapper[4957]: I1128 21:47:33.012444 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwcfg\" (UniqueName: \"kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg\") pod \"redhat-marketplace-kppcc\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:33 crc kubenswrapper[4957]: I1128 21:47:33.091894 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:33 crc kubenswrapper[4957]: I1128 21:47:33.521058 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:34 crc kubenswrapper[4957]: I1128 21:47:34.093005 4957 generic.go:334] "Generic (PLEG): container finished" podID="332eae6c-c12a-4781-823b-6764453e2c62" containerID="9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a" exitCode=0 Nov 28 21:47:34 crc kubenswrapper[4957]: I1128 21:47:34.093064 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerDied","Data":"9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a"} Nov 28 21:47:34 crc kubenswrapper[4957]: I1128 21:47:34.093098 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerStarted","Data":"e1e06799488f0bac0ae2bdc12fd49b4b20c58da8672e9ebe30e87b5d5443e99f"} Nov 28 21:47:36 crc kubenswrapper[4957]: I1128 21:47:36.143420 4957 generic.go:334] "Generic (PLEG): container finished" podID="332eae6c-c12a-4781-823b-6764453e2c62" containerID="09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e" exitCode=0 Nov 28 21:47:36 crc kubenswrapper[4957]: I1128 21:47:36.143536 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerDied","Data":"09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e"} Nov 28 21:47:37 crc kubenswrapper[4957]: I1128 21:47:37.155179 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerStarted","Data":"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd"} Nov 28 21:47:37 crc kubenswrapper[4957]: I1128 21:47:37.184345 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kppcc" podStartSLOduration=2.660786335 podStartE2EDuration="5.184324093s" podCreationTimestamp="2025-11-28 21:47:32 +0000 UTC" firstStartedPulling="2025-11-28 21:47:34.095821221 +0000 UTC m=+3493.564469130" lastFinishedPulling="2025-11-28 21:47:36.619358959 +0000 UTC m=+3496.088006888" observedRunningTime="2025-11-28 21:47:37.178495519 +0000 UTC m=+3496.647143428" 
watchObservedRunningTime="2025-11-28 21:47:37.184324093 +0000 UTC m=+3496.652972012" Nov 28 21:47:43 crc kubenswrapper[4957]: I1128 21:47:43.092239 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:43 crc kubenswrapper[4957]: I1128 21:47:43.092739 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:43 crc kubenswrapper[4957]: I1128 21:47:43.141968 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:43 crc kubenswrapper[4957]: I1128 21:47:43.297564 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:43 crc kubenswrapper[4957]: I1128 21:47:43.379514 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.243543 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kppcc" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="registry-server" containerID="cri-o://8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd" gracePeriod=2 Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.815428 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.846496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwcfg\" (UniqueName: \"kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg\") pod \"332eae6c-c12a-4781-823b-6764453e2c62\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.846569 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content\") pod \"332eae6c-c12a-4781-823b-6764453e2c62\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.846674 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities\") pod \"332eae6c-c12a-4781-823b-6764453e2c62\" (UID: \"332eae6c-c12a-4781-823b-6764453e2c62\") " Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.848814 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities" (OuterVolumeSpecName: "utilities") pod "332eae6c-c12a-4781-823b-6764453e2c62" (UID: "332eae6c-c12a-4781-823b-6764453e2c62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.852652 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg" (OuterVolumeSpecName: "kube-api-access-mwcfg") pod "332eae6c-c12a-4781-823b-6764453e2c62" (UID: "332eae6c-c12a-4781-823b-6764453e2c62"). InnerVolumeSpecName "kube-api-access-mwcfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.867169 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "332eae6c-c12a-4781-823b-6764453e2c62" (UID: "332eae6c-c12a-4781-823b-6764453e2c62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.948685 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.948718 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwcfg\" (UniqueName: \"kubernetes.io/projected/332eae6c-c12a-4781-823b-6764453e2c62-kube-api-access-mwcfg\") on node \"crc\" DevicePath \"\"" Nov 28 21:47:45 crc kubenswrapper[4957]: I1128 21:47:45.948730 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/332eae6c-c12a-4781-823b-6764453e2c62-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.255842 4957 generic.go:334] "Generic (PLEG): container finished" podID="332eae6c-c12a-4781-823b-6764453e2c62" containerID="8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd" exitCode=0 Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.255883 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerDied","Data":"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd"} Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.255915 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kppcc" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.255936 4957 scope.go:117] "RemoveContainer" containerID="8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.255923 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kppcc" event={"ID":"332eae6c-c12a-4781-823b-6764453e2c62","Type":"ContainerDied","Data":"e1e06799488f0bac0ae2bdc12fd49b4b20c58da8672e9ebe30e87b5d5443e99f"} Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.280954 4957 scope.go:117] "RemoveContainer" containerID="09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.294428 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.308063 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kppcc"] Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.317509 4957 scope.go:117] "RemoveContainer" containerID="9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.388279 4957 scope.go:117] "RemoveContainer" containerID="8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd" Nov 28 21:47:46 crc kubenswrapper[4957]: E1128 21:47:46.388818 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd\": container with ID starting with 8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd not found: ID does not exist" containerID="8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.388945 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd"} err="failed to get container status \"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd\": rpc error: code = NotFound desc = could not find container \"8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd\": container with ID starting with 8864b5897fa1c0cc0d7e9e6c9131825c7b2da396b5beb1aa39de6edbeead81cd not found: ID does not exist" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.389033 4957 scope.go:117] "RemoveContainer" containerID="09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e" Nov 28 21:47:46 crc kubenswrapper[4957]: E1128 21:47:46.389536 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e\": container with ID starting with 09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e not found: ID does not exist" containerID="09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.389584 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e"} err="failed to get container status \"09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e\": rpc error: code = NotFound desc = could not find 
container \"09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e\": container with ID starting with 09409a935faeb171534694b1e9c313ff36487cb069a8d00ae8941bbd22672d1e not found: ID does not exist" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.389605 4957 scope.go:117] "RemoveContainer" containerID="9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a" Nov 28 21:47:46 crc kubenswrapper[4957]: E1128 21:47:46.389878 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a\": container with ID starting with 9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a not found: ID does not exist" containerID="9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.389960 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a"} err="failed to get container status \"9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a\": rpc error: code = NotFound desc = could not find container \"9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a\": container with ID starting with 9fae8fefe3d2169b1d0921ca5ba3499e01e1159f9642ac3ef450fbd1aad7c01a not found: ID does not exist" Nov 28 21:47:46 crc kubenswrapper[4957]: I1128 21:47:46.826231 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332eae6c-c12a-4781-823b-6764453e2c62" path="/var/lib/kubelet/pods/332eae6c-c12a-4781-823b-6764453e2c62/volumes" Nov 28 21:49:08 crc kubenswrapper[4957]: I1128 21:49:08.992601 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:49:08 crc kubenswrapper[4957]: I1128 21:49:08.993186 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:49:38 crc kubenswrapper[4957]: I1128 21:49:38.992031 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:49:38 crc kubenswrapper[4957]: I1128 21:49:38.992918 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:50:08 crc kubenswrapper[4957]: I1128 21:50:08.992665 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 
Nov 28 21:50:08 crc kubenswrapper[4957]: I1128 21:50:08.993198 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:50:08 crc kubenswrapper[4957]: I1128 21:50:08.993254 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:50:08 crc kubenswrapper[4957]: I1128 21:50:08.994093 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:50:08 crc kubenswrapper[4957]: I1128 21:50:08.994150 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac" gracePeriod=600 Nov 28 21:50:09 crc kubenswrapper[4957]: I1128 21:50:09.759079 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac" exitCode=0 Nov 28 21:50:09 crc kubenswrapper[4957]: I1128 21:50:09.759180 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac"} Nov 28 21:50:09 crc kubenswrapper[4957]: I1128 21:50:09.759725 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"} Nov 28 21:50:09 crc kubenswrapper[4957]: I1128 21:50:09.759758 4957 scope.go:117] "RemoveContainer" containerID="acf9e42f2c32db4ca2f6de616d9c7391d03f80deef391c2a0d9df9d76394827c" Nov 28 21:52:38 crc kubenswrapper[4957]: I1128 21:52:38.992090 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:52:38 crc kubenswrapper[4957]: I1128 21:52:38.992775 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:53:08 crc kubenswrapper[4957]: I1128 21:53:08.992202 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:53:08 crc kubenswrapper[4957]: I1128 21:53:08.992768 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:53:38 crc kubenswrapper[4957]: I1128 21:53:38.992281 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 21:53:38 crc kubenswrapper[4957]: I1128 21:53:38.992839 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 21:53:38 crc kubenswrapper[4957]: I1128 21:53:38.992886 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 21:53:38 crc kubenswrapper[4957]: I1128 21:53:38.993722 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 21:53:38 crc kubenswrapper[4957]: I1128 21:53:38.993775 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" gracePeriod=600 Nov 28 21:53:39 crc kubenswrapper[4957]: E1128 21:53:39.115301 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:53:39 crc kubenswrapper[4957]: I1128 21:53:39.199632 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" exitCode=0 Nov 28 21:53:39 crc kubenswrapper[4957]: I1128 21:53:39.199668 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"} Nov 28 21:53:39 crc kubenswrapper[4957]: I1128 21:53:39.199965 4957 scope.go:117] "RemoveContainer" containerID="5e43e59bbb39069d6d335f1b61dba31bd85ea3a869624b2fe00037e1cc93f2ac" Nov 28 
Nov 28 21:53:39 crc kubenswrapper[4957]: I1128 21:53:39.201318 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:53:39 crc kubenswrapper[4957]: E1128 21:53:39.201777 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:53:52 crc kubenswrapper[4957]: I1128 21:53:52.812869 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:53:52 crc kubenswrapper[4957]: E1128 21:53:52.813622 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:54:06 crc kubenswrapper[4957]: I1128 21:54:06.813361 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:54:06 crc kubenswrapper[4957]: E1128 21:54:06.814103 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.936981 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:07 crc kubenswrapper[4957]: E1128 21:54:07.937789 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="extract-utilities" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.937804 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="extract-utilities" Nov 28 21:54:07 crc kubenswrapper[4957]: E1128 21:54:07.937838 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="extract-content" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.937844 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="extract-content" Nov 28 21:54:07 crc kubenswrapper[4957]: E1128 21:54:07.937859 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="registry-server" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.937865 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="332eae6c-c12a-4781-823b-6764453e2c62" containerName="registry-server" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.938107 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="332eae6c-c12a-4781-823b-6764453e2c62"
containerName="registry-server" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.939818 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:07 crc kubenswrapper[4957]: I1128 21:54:07.952344 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.096689 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.097145 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.097173 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ltp\" (UniqueName: \"kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.198819 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.198961 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.199014 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5ltp\" (UniqueName: \"kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.199343 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.199369 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 
28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.224471 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5ltp\" (UniqueName: \"kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp\") pod \"redhat-operators-8pgmc\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") " pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.278332 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:08 crc kubenswrapper[4957]: I1128 21:54:08.780322 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:09 crc kubenswrapper[4957]: I1128 21:54:09.529534 4957 generic.go:334] "Generic (PLEG): container finished" podID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerID="ef40a63f697cff1ea6f360db4ecc1aa6dab105c7be9d9d6fa89f554804814488" exitCode=0 Nov 28 21:54:09 crc kubenswrapper[4957]: I1128 21:54:09.529643 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerDied","Data":"ef40a63f697cff1ea6f360db4ecc1aa6dab105c7be9d9d6fa89f554804814488"} Nov 28 21:54:09 crc kubenswrapper[4957]: I1128 21:54:09.530021 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerStarted","Data":"54c00969145fd317d0356d3f1c4e57fd831da141003fe1c49b9b05edf824259c"} Nov 28 21:54:09 crc kubenswrapper[4957]: I1128 21:54:09.532251 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 21:54:10 crc kubenswrapper[4957]: I1128 21:54:10.540947 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerStarted","Data":"e222f14146c2b7472bdd8cb90acea012254b5e5b5a79c4609fe4240f5206f9f2"} Nov 28 21:54:13 crc kubenswrapper[4957]: I1128 21:54:13.571358 4957 generic.go:334] "Generic (PLEG): container finished" podID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerID="e222f14146c2b7472bdd8cb90acea012254b5e5b5a79c4609fe4240f5206f9f2" exitCode=0 Nov 28 21:54:13 crc kubenswrapper[4957]: I1128 21:54:13.571528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerDied","Data":"e222f14146c2b7472bdd8cb90acea012254b5e5b5a79c4609fe4240f5206f9f2"} Nov 28 21:54:14 crc kubenswrapper[4957]: I1128 21:54:14.583776 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerStarted","Data":"9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0"} Nov 28 21:54:14 crc kubenswrapper[4957]: I1128 21:54:14.614015 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8pgmc" podStartSLOduration=2.964084194 podStartE2EDuration="7.613996151s" podCreationTimestamp="2025-11-28 21:54:07 +0000 UTC" firstStartedPulling="2025-11-28 21:54:09.53194762 +0000 UTC m=+3889.000595529" lastFinishedPulling="2025-11-28 21:54:14.181859577 +0000 UTC m=+3893.650507486" observedRunningTime="2025-11-28 21:54:14.604119187 +0000 UTC 
m=+3894.072767096" watchObservedRunningTime="2025-11-28 21:54:14.613996151 +0000 UTC m=+3894.082644060" Nov 28 21:54:18 crc kubenswrapper[4957]: I1128 21:54:18.278722 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:18 crc kubenswrapper[4957]: I1128 21:54:18.279374 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:18 crc kubenswrapper[4957]: I1128 21:54:18.813779 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:54:18 crc kubenswrapper[4957]: E1128 21:54:18.814323 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:54:19 crc kubenswrapper[4957]: I1128 21:54:19.335181 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8pgmc" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server" probeResult="failure" output=< Nov 28 21:54:19 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 21:54:19 crc kubenswrapper[4957]: > Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.326806 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.378475 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.559454 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:29 crc kubenswrapper[4957]: I1128 21:54:29.741724 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pgmc" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server" containerID="cri-o://9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" gracePeriod=2 Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.286472 4957 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.326806 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pgmc"
Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.378475 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pgmc"
Nov 28 21:54:28 crc kubenswrapper[4957]: I1128 21:54:28.559454 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"]
Nov 28 21:54:29 crc kubenswrapper[4957]: I1128 21:54:29.741724 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pgmc" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server" containerID="cri-o://9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" gracePeriod=2
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.286472 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pgmc"
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.310124 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content\") pod \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") "
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.310335 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5ltp\" (UniqueName: \"kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp\") pod \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") "
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.310447 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities\") pod \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\" (UID: \"59bd2454-a0c2-48e3-bb81-11a1267ad67c\") "
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.311154 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities" (OuterVolumeSpecName: "utilities") pod "59bd2454-a0c2-48e3-bb81-11a1267ad67c" (UID: "59bd2454-a0c2-48e3-bb81-11a1267ad67c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.311417 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.317407 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp" (OuterVolumeSpecName: "kube-api-access-h5ltp") pod "59bd2454-a0c2-48e3-bb81-11a1267ad67c" (UID: "59bd2454-a0c2-48e3-bb81-11a1267ad67c"). InnerVolumeSpecName "kube-api-access-h5ltp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.413478 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5ltp\" (UniqueName: \"kubernetes.io/projected/59bd2454-a0c2-48e3-bb81-11a1267ad67c-kube-api-access-h5ltp\") on node \"crc\" DevicePath \"\""
Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.431202 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59bd2454-a0c2-48e3-bb81-11a1267ad67c" (UID: "59bd2454-a0c2-48e3-bb81-11a1267ad67c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.514843 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bd2454-a0c2-48e3-bb81-11a1267ad67c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.753269 4957 generic.go:334] "Generic (PLEG): container finished" podID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerID="9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" exitCode=0 Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.753328 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerDied","Data":"9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0"} Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.753356 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pgmc" event={"ID":"59bd2454-a0c2-48e3-bb81-11a1267ad67c","Type":"ContainerDied","Data":"54c00969145fd317d0356d3f1c4e57fd831da141003fe1c49b9b05edf824259c"} Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.753373 4957 scope.go:117] "RemoveContainer" containerID="9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.753595 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pgmc" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.776022 4957 scope.go:117] "RemoveContainer" containerID="e222f14146c2b7472bdd8cb90acea012254b5e5b5a79c4609fe4240f5206f9f2" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.787383 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.802367 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8pgmc"] Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.808106 4957 scope.go:117] "RemoveContainer" containerID="ef40a63f697cff1ea6f360db4ecc1aa6dab105c7be9d9d6fa89f554804814488" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.829751 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" path="/var/lib/kubelet/pods/59bd2454-a0c2-48e3-bb81-11a1267ad67c/volumes" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.856953 4957 scope.go:117] "RemoveContainer" containerID="9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" Nov 28 21:54:30 crc kubenswrapper[4957]: E1128 21:54:30.857488 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0\": container with ID starting with 9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0 not found: ID does not exist" containerID="9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0" Nov 28 21:54:30 crc kubenswrapper[4957]: I1128 21:54:30.857517 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0"} err="failed to get container status \"9e69cc8b2494acec7c6b597fa14f2105fac04bad5ae7b6bd790d0286d4da5ae0\": rpc error: code = NotFound desc 
Nov 28 21:54:32 crc kubenswrapper[4957]: I1128 21:54:32.820240 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"
Nov 28 21:54:32 crc kubenswrapper[4957]: E1128 21:54:32.820943 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
38.102.83.111:43760->38.102.83.111:40891: read tcp 38.102.83.111:43760->38.102.83.111:40891: read: connection reset by peer Nov 28 21:54:46 crc kubenswrapper[4957]: I1128 21:54:46.813633 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:54:46 crc kubenswrapper[4957]: E1128 21:54:46.814689 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:54:49 crc kubenswrapper[4957]: I1128 21:54:49.231420 4957 trace.go:236] Trace[1484590842]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (28-Nov-2025 21:54:48.219) (total time: 1011ms): Nov 28 21:54:49 crc kubenswrapper[4957]: Trace[1484590842]: [1.011887321s] [1.011887321s] END Nov 28 21:54:59 crc kubenswrapper[4957]: I1128 21:54:59.814067 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:54:59 crc kubenswrapper[4957]: E1128 21:54:59.814909 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:55:10 crc kubenswrapper[4957]: I1128 21:55:10.836023 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:55:10 crc kubenswrapper[4957]: E1128 21:55:10.838025 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:55:23 crc kubenswrapper[4957]: I1128 21:55:23.812758 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:55:23 crc kubenswrapper[4957]: E1128 21:55:23.813535 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:55:35 crc kubenswrapper[4957]: I1128 21:55:35.813775 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:55:35 crc kubenswrapper[4957]: E1128 21:55:35.814819 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
Nov 28 21:55:48 crc kubenswrapper[4957]: I1128 21:55:48.813199 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"
Nov 28 21:55:48 crc kubenswrapper[4957]: E1128 21:55:48.815363 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:56:02 crc kubenswrapper[4957]: I1128 21:56:02.812916 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7"
Nov 28 21:56:02 crc kubenswrapper[4957]: E1128 21:56:02.813669 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.304251 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"]
Nov 28 21:56:11 crc kubenswrapper[4957]: E1128 21:56:11.305466 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.305485 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server"
Nov 28 21:56:11 crc kubenswrapper[4957]: E1128 21:56:11.305515 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="extract-content"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.305523 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="extract-content"
Nov 28 21:56:11 crc kubenswrapper[4957]: E1128 21:56:11.305552 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="extract-utilities"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.305561 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="extract-utilities"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.305831 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bd2454-a0c2-48e3-bb81-11a1267ad67c" containerName="registry-server"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.308351 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.321305 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"]
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.463161 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.463452 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.463770 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.565898 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.566070 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.566174 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.566913 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.566985 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.588384 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm"
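The mount records above show the shape every catalog pod in this log shares: two emptyDir volumes ("utilities", "catalog-content") plus a projected service-account token, consumed first by the extract-utilities and extract-content containers and then by the long-running registry-server. A rough client-go sketch of that spec follows; image names and mount paths are placeholders, since the real pod is generated by the marketplace operator.

```go
// Sketch of the catalog-pod volume/container layout implied by the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func catalogPodSpec() corev1.PodSpec {
	emptyDir := func(name string) corev1.Volume {
		return corev1.Volume{
			Name:         name,
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}
	}
	// Mount paths are assumptions for illustration only.
	mounts := []corev1.VolumeMount{
		{Name: "utilities", MountPath: "/utilities"},
		{Name: "catalog-content", MountPath: "/extracted-catalog"},
	}
	return corev1.PodSpec{
		Volumes: []corev1.Volume{emptyDir("utilities"), emptyDir("catalog-content")},
		InitContainers: []corev1.Container{
			{Name: "extract-utilities", Image: "placeholder-utilities:latest", VolumeMounts: mounts},
			{Name: "extract-content", Image: "placeholder-index:latest", VolumeMounts: mounts},
		},
		Containers: []corev1.Container{
			{Name: "registry-server", Image: "placeholder-index:latest", VolumeMounts: mounts},
		},
	}
}

func main() { fmt.Println(len(catalogPodSpec().InitContainers), "init containers") }
```

The two init containers explain the ContainerDied exitCode=0 events that follow: each runs to completion before registry-server starts.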
"MountVolume.SetUp succeeded for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") pod \"certified-operators-fq2qm\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:11 crc kubenswrapper[4957]: I1128 21:56:11.644775 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:12 crc kubenswrapper[4957]: I1128 21:56:12.165117 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"] Nov 28 21:56:12 crc kubenswrapper[4957]: I1128 21:56:12.834923 4957 generic.go:334] "Generic (PLEG): container finished" podID="e48d64aa-13e0-404e-ab9b-b87553957611" containerID="26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f" exitCode=0 Nov 28 21:56:12 crc kubenswrapper[4957]: I1128 21:56:12.835021 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerDied","Data":"26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f"} Nov 28 21:56:12 crc kubenswrapper[4957]: I1128 21:56:12.835518 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerStarted","Data":"7ba4d50751544267552b076df7c7b8f5161be6e71769d3684a3636c34b2203bc"} Nov 28 21:56:13 crc kubenswrapper[4957]: I1128 21:56:13.849129 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerStarted","Data":"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b"} Nov 28 21:56:14 crc kubenswrapper[4957]: I1128 21:56:14.862405 4957 generic.go:334] "Generic (PLEG): container finished" podID="e48d64aa-13e0-404e-ab9b-b87553957611" containerID="e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b" exitCode=0 Nov 28 21:56:14 crc kubenswrapper[4957]: I1128 21:56:14.862519 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerDied","Data":"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b"} Nov 28 21:56:15 crc kubenswrapper[4957]: I1128 21:56:15.875739 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerStarted","Data":"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5"} Nov 28 21:56:15 crc kubenswrapper[4957]: I1128 21:56:15.898693 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq2qm" podStartSLOduration=2.433323687 podStartE2EDuration="4.898667776s" podCreationTimestamp="2025-11-28 21:56:11 +0000 UTC" firstStartedPulling="2025-11-28 21:56:12.837286627 +0000 UTC m=+4012.305934536" lastFinishedPulling="2025-11-28 21:56:15.302630716 +0000 UTC m=+4014.771278625" observedRunningTime="2025-11-28 21:56:15.891366205 +0000 UTC m=+4015.360014114" watchObservedRunningTime="2025-11-28 21:56:15.898667776 +0000 UTC m=+4015.367315685" Nov 28 21:56:17 crc kubenswrapper[4957]: I1128 21:56:17.813416 4957 scope.go:117] "RemoveContainer" 
containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:56:17 crc kubenswrapper[4957]: E1128 21:56:17.813994 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:56:21 crc kubenswrapper[4957]: I1128 21:56:21.644876 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:21 crc kubenswrapper[4957]: I1128 21:56:21.645709 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:21 crc kubenswrapper[4957]: I1128 21:56:21.727695 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:22 crc kubenswrapper[4957]: I1128 21:56:22.025085 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:22 crc kubenswrapper[4957]: I1128 21:56:22.077759 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"] Nov 28 21:56:23 crc kubenswrapper[4957]: I1128 21:56:23.962976 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fq2qm" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="registry-server" containerID="cri-o://dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5" gracePeriod=2 Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.377367 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.380879 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.396774 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.483145 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.483328 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.483611 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26gt\" (UniqueName: \"kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.525843 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.585833 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities\") pod \"e48d64aa-13e0-404e-ab9b-b87553957611\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.586011 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") pod \"e48d64aa-13e0-404e-ab9b-b87553957611\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.586067 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content\") pod \"e48d64aa-13e0-404e-ab9b-b87553957611\" (UID: \"e48d64aa-13e0-404e-ab9b-b87553957611\") " Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.586919 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.586996 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 
28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.587096 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26gt\" (UniqueName: \"kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.587408 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities" (OuterVolumeSpecName: "utilities") pod "e48d64aa-13e0-404e-ab9b-b87553957611" (UID: "e48d64aa-13e0-404e-ab9b-b87553957611"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.587858 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.588177 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.596510 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q" (OuterVolumeSpecName: "kube-api-access-w958q") pod "e48d64aa-13e0-404e-ab9b-b87553957611" (UID: "e48d64aa-13e0-404e-ab9b-b87553957611"). InnerVolumeSpecName "kube-api-access-w958q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.610163 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26gt\" (UniqueName: \"kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt\") pod \"community-operators-bwtf7\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.648081 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e48d64aa-13e0-404e-ab9b-b87553957611" (UID: "e48d64aa-13e0-404e-ab9b-b87553957611"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.689846 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.689889 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w958q\" (UniqueName: \"kubernetes.io/projected/e48d64aa-13e0-404e-ab9b-b87553957611-kube-api-access-w958q\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.689903 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e48d64aa-13e0-404e-ab9b-b87553957611-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.825646 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.978349 4957 generic.go:334] "Generic (PLEG): container finished" podID="e48d64aa-13e0-404e-ab9b-b87553957611" containerID="dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5" exitCode=0 Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.978441 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq2qm" Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.978458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerDied","Data":"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5"} Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.979265 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq2qm" event={"ID":"e48d64aa-13e0-404e-ab9b-b87553957611","Type":"ContainerDied","Data":"7ba4d50751544267552b076df7c7b8f5161be6e71769d3684a3636c34b2203bc"} Nov 28 21:56:24 crc kubenswrapper[4957]: I1128 21:56:24.979293 4957 scope.go:117] "RemoveContainer" containerID="dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.010403 4957 scope.go:117] "RemoveContainer" containerID="e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.013691 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"] Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.024171 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fq2qm"] Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.035256 4957 scope.go:117] "RemoveContainer" containerID="26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.057358 4957 scope.go:117] "RemoveContainer" containerID="dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5" Nov 28 21:56:25 crc kubenswrapper[4957]: E1128 21:56:25.057851 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5\": container with ID starting with 
dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5 not found: ID does not exist" containerID="dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.057903 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5"} err="failed to get container status \"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5\": rpc error: code = NotFound desc = could not find container \"dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5\": container with ID starting with dd9ff24393839ca58e64a80c1f08018c7f9979db5b6241935548e7721690e6f5 not found: ID does not exist" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.057932 4957 scope.go:117] "RemoveContainer" containerID="e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b" Nov 28 21:56:25 crc kubenswrapper[4957]: E1128 21:56:25.058431 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b\": container with ID starting with e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b not found: ID does not exist" containerID="e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.058467 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b"} err="failed to get container status \"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b\": rpc error: code = NotFound desc = could not find container \"e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b\": container with ID starting with e4ff64f3cbb684ef27fc941d14e0401d0ec04d6bd3b890c6b2b378ec2fd5de2b not found: ID does not exist" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.058489 4957 scope.go:117] "RemoveContainer" containerID="26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f" Nov 28 21:56:25 crc kubenswrapper[4957]: E1128 21:56:25.058731 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f\": container with ID starting with 26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f not found: ID does not exist" containerID="26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.058753 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f"} err="failed to get container status \"26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f\": rpc error: code = NotFound desc = could not find container \"26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f\": container with ID starting with 26e279e4e91d63901e4bf8800b0a113d58cfd2039c65e1ce9b82cbdfb99ee56f not found: ID does not exist" Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.280165 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.989372 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="673dd14a-7bf1-423c-9c51-e67c74277835" containerID="0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e" exitCode=0 Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.989432 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerDied","Data":"0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e"} Nov 28 21:56:25 crc kubenswrapper[4957]: I1128 21:56:25.989702 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerStarted","Data":"55ea22365afc8df3044ed6b82bfa899dd9d1dc8c2d4e494be8279c5ba4bd0f2d"} Nov 28 21:56:26 crc kubenswrapper[4957]: I1128 21:56:26.825036 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" path="/var/lib/kubelet/pods/e48d64aa-13e0-404e-ab9b-b87553957611/volumes" Nov 28 21:56:28 crc kubenswrapper[4957]: I1128 21:56:28.020429 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerStarted","Data":"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1"} Nov 28 21:56:29 crc kubenswrapper[4957]: I1128 21:56:29.038072 4957 generic.go:334] "Generic (PLEG): container finished" podID="673dd14a-7bf1-423c-9c51-e67c74277835" containerID="7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1" exitCode=0 Nov 28 21:56:29 crc kubenswrapper[4957]: I1128 21:56:29.038136 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerDied","Data":"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1"} Nov 28 21:56:30 crc kubenswrapper[4957]: I1128 21:56:30.062716 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerStarted","Data":"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41"} Nov 28 21:56:30 crc kubenswrapper[4957]: I1128 21:56:30.098894 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bwtf7" podStartSLOduration=2.569997443 podStartE2EDuration="6.098874236s" podCreationTimestamp="2025-11-28 21:56:24 +0000 UTC" firstStartedPulling="2025-11-28 21:56:25.992726359 +0000 UTC m=+4025.461374268" lastFinishedPulling="2025-11-28 21:56:29.521603152 +0000 UTC m=+4028.990251061" observedRunningTime="2025-11-28 21:56:30.089105644 +0000 UTC m=+4029.557753603" watchObservedRunningTime="2025-11-28 21:56:30.098874236 +0000 UTC m=+4029.567522145" Nov 28 21:56:32 crc kubenswrapper[4957]: I1128 21:56:32.813158 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:56:32 crc kubenswrapper[4957]: E1128 21:56:32.813755 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" 
podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:56:34 crc kubenswrapper[4957]: I1128 21:56:34.826648 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:34 crc kubenswrapper[4957]: I1128 21:56:34.827105 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:34 crc kubenswrapper[4957]: I1128 21:56:34.885190 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:35 crc kubenswrapper[4957]: I1128 21:56:35.165521 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:35 crc kubenswrapper[4957]: I1128 21:56:35.217079 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.134784 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bwtf7" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="registry-server" containerID="cri-o://3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41" gracePeriod=2 Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.678631 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.784510 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities\") pod \"673dd14a-7bf1-423c-9c51-e67c74277835\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.784568 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content\") pod \"673dd14a-7bf1-423c-9c51-e67c74277835\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.784641 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c26gt\" (UniqueName: \"kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt\") pod \"673dd14a-7bf1-423c-9c51-e67c74277835\" (UID: \"673dd14a-7bf1-423c-9c51-e67c74277835\") " Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.785524 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities" (OuterVolumeSpecName: "utilities") pod "673dd14a-7bf1-423c-9c51-e67c74277835" (UID: "673dd14a-7bf1-423c-9c51-e67c74277835"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.794364 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt" (OuterVolumeSpecName: "kube-api-access-c26gt") pod "673dd14a-7bf1-423c-9c51-e67c74277835" (UID: "673dd14a-7bf1-423c-9c51-e67c74277835"). InnerVolumeSpecName "kube-api-access-c26gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.857791 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "673dd14a-7bf1-423c-9c51-e67c74277835" (UID: "673dd14a-7bf1-423c-9c51-e67c74277835"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.887798 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.887851 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c26gt\" (UniqueName: \"kubernetes.io/projected/673dd14a-7bf1-423c-9c51-e67c74277835-kube-api-access-c26gt\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:37 crc kubenswrapper[4957]: I1128 21:56:37.887870 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673dd14a-7bf1-423c-9c51-e67c74277835-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.149086 4957 generic.go:334] "Generic (PLEG): container finished" podID="673dd14a-7bf1-423c-9c51-e67c74277835" containerID="3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41" exitCode=0 Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.149138 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerDied","Data":"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41"} Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.149171 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwtf7" event={"ID":"673dd14a-7bf1-423c-9c51-e67c74277835","Type":"ContainerDied","Data":"55ea22365afc8df3044ed6b82bfa899dd9d1dc8c2d4e494be8279c5ba4bd0f2d"} Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.149197 4957 scope.go:117] "RemoveContainer" containerID="3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.149191 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bwtf7" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.175773 4957 scope.go:117] "RemoveContainer" containerID="7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.190231 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.201797 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bwtf7"] Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.208410 4957 scope.go:117] "RemoveContainer" containerID="0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.259001 4957 scope.go:117] "RemoveContainer" containerID="3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41" Nov 28 21:56:38 crc kubenswrapper[4957]: E1128 21:56:38.259578 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41\": container with ID starting with 3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41 not found: ID does not exist" containerID="3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.259616 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41"} err="failed to get container status \"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41\": rpc error: code = NotFound desc = could not find container \"3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41\": container with ID starting with 3925dcf158796ce72242ecca25862e26714b85dc85d4ed828c499922706edf41 not found: ID does not exist" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.259636 4957 scope.go:117] "RemoveContainer" containerID="7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1" Nov 28 21:56:38 crc kubenswrapper[4957]: E1128 21:56:38.260019 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1\": container with ID starting with 7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1 not found: ID does not exist" containerID="7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.260051 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1"} err="failed to get container status \"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1\": rpc error: code = NotFound desc = could not find container \"7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1\": container with ID starting with 7993fb6dff3d9dd735a033697e64d034d3480094bfd5b8bfa912dbd2154590f1 not found: ID does not exist" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.260068 4957 scope.go:117] "RemoveContainer" containerID="0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e" Nov 28 21:56:38 crc kubenswrapper[4957]: E1128 21:56:38.260421 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e\": container with ID starting with 0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e not found: ID does not exist" containerID="0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.260442 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e"} err="failed to get container status \"0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e\": rpc error: code = NotFound desc = could not find container \"0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e\": container with ID starting with 0a4bf7f6628837568952de5c6c28a68b951bbe40db0511fbea9c57173c4fdd2e not found: ID does not exist" Nov 28 21:56:38 crc kubenswrapper[4957]: I1128 21:56:38.835586 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" path="/var/lib/kubelet/pods/673dd14a-7bf1-423c-9c51-e67c74277835/volumes" Nov 28 21:56:44 crc kubenswrapper[4957]: I1128 21:56:44.813982 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:56:44 crc kubenswrapper[4957]: E1128 21:56:44.814932 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:56:55 crc kubenswrapper[4957]: I1128 21:56:55.814493 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:56:55 crc kubenswrapper[4957]: E1128 21:56:55.815825 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:57:09 crc kubenswrapper[4957]: I1128 21:57:09.813110 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:57:09 crc kubenswrapper[4957]: E1128 21:57:09.814178 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:57:21 crc kubenswrapper[4957]: I1128 21:57:21.813548 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:57:21 crc kubenswrapper[4957]: E1128 21:57:21.814996 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:57:36 crc kubenswrapper[4957]: I1128 21:57:36.813795 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:57:36 crc kubenswrapper[4957]: E1128 21:57:36.815118 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.723983 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725065 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="extract-content" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725081 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="extract-content" Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725101 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="extract-utilities" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725108 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="extract-utilities" Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725120 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725126 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725142 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725150 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725169 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="extract-utilities" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725175 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="extract-utilities" Nov 28 21:57:45 crc kubenswrapper[4957]: E1128 21:57:45.725234 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="extract-content" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725241 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" 
containerName="extract-content" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725436 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="673dd14a-7bf1-423c-9c51-e67c74277835" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.725464 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48d64aa-13e0-404e-ab9b-b87553957611" containerName="registry-server" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.727436 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.746760 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.799026 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.799564 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97xp\" (UniqueName: \"kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.799738 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.901858 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.902001 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.902026 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97xp\" (UniqueName: \"kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.902456 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content\") pod \"redhat-marketplace-xm5pw\" (UID: 
\"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.902760 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:45 crc kubenswrapper[4957]: I1128 21:57:45.923954 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97xp\" (UniqueName: \"kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp\") pod \"redhat-marketplace-xm5pw\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:46 crc kubenswrapper[4957]: I1128 21:57:46.060248 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:47 crc kubenswrapper[4957]: I1128 21:57:47.138453 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:47 crc kubenswrapper[4957]: W1128 21:57:47.154755 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8806871_0851_409e_a084_d83cec432132.slice/crio-bf2694c0b4a43033268d79e9bcaf2a92c36b32582035df2fc2f451f1f24fede5 WatchSource:0}: Error finding container bf2694c0b4a43033268d79e9bcaf2a92c36b32582035df2fc2f451f1f24fede5: Status 404 returned error can't find the container with id bf2694c0b4a43033268d79e9bcaf2a92c36b32582035df2fc2f451f1f24fede5 Nov 28 21:57:48 crc kubenswrapper[4957]: I1128 21:57:48.039128 4957 generic.go:334] "Generic (PLEG): container finished" podID="f8806871-0851-409e-a084-d83cec432132" containerID="83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b" exitCode=0 Nov 28 21:57:48 crc kubenswrapper[4957]: I1128 21:57:48.039166 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerDied","Data":"83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b"} Nov 28 21:57:48 crc kubenswrapper[4957]: I1128 21:57:48.039634 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerStarted","Data":"bf2694c0b4a43033268d79e9bcaf2a92c36b32582035df2fc2f451f1f24fede5"} Nov 28 21:57:50 crc kubenswrapper[4957]: I1128 21:57:50.063168 4957 generic.go:334] "Generic (PLEG): container finished" podID="f8806871-0851-409e-a084-d83cec432132" containerID="4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82" exitCode=0 Nov 28 21:57:50 crc kubenswrapper[4957]: I1128 21:57:50.063258 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerDied","Data":"4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82"} Nov 28 21:57:51 crc kubenswrapper[4957]: I1128 21:57:51.077289 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" 
event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerStarted","Data":"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1"} Nov 28 21:57:51 crc kubenswrapper[4957]: I1128 21:57:51.102198 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xm5pw" podStartSLOduration=3.628277484 podStartE2EDuration="6.102176465s" podCreationTimestamp="2025-11-28 21:57:45 +0000 UTC" firstStartedPulling="2025-11-28 21:57:48.041438772 +0000 UTC m=+4107.510086681" lastFinishedPulling="2025-11-28 21:57:50.515337753 +0000 UTC m=+4109.983985662" observedRunningTime="2025-11-28 21:57:51.093332046 +0000 UTC m=+4110.561979975" watchObservedRunningTime="2025-11-28 21:57:51.102176465 +0000 UTC m=+4110.570824374" Nov 28 21:57:51 crc kubenswrapper[4957]: I1128 21:57:51.813468 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:57:51 crc kubenswrapper[4957]: E1128 21:57:51.813926 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:57:56 crc kubenswrapper[4957]: I1128 21:57:56.060516 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:56 crc kubenswrapper[4957]: I1128 21:57:56.061128 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:56 crc kubenswrapper[4957]: I1128 21:57:56.128139 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:56 crc kubenswrapper[4957]: I1128 21:57:56.197400 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:56 crc kubenswrapper[4957]: I1128 21:57:56.375424 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.173592 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xm5pw" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="registry-server" containerID="cri-o://0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1" gracePeriod=2 Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.647878 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.811708 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b97xp\" (UniqueName: \"kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp\") pod \"f8806871-0851-409e-a084-d83cec432132\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.811761 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities\") pod \"f8806871-0851-409e-a084-d83cec432132\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.811914 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content\") pod \"f8806871-0851-409e-a084-d83cec432132\" (UID: \"f8806871-0851-409e-a084-d83cec432132\") " Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.812552 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities" (OuterVolumeSpecName: "utilities") pod "f8806871-0851-409e-a084-d83cec432132" (UID: "f8806871-0851-409e-a084-d83cec432132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.813051 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.818146 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp" (OuterVolumeSpecName: "kube-api-access-b97xp") pod "f8806871-0851-409e-a084-d83cec432132" (UID: "f8806871-0851-409e-a084-d83cec432132"). InnerVolumeSpecName "kube-api-access-b97xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.832501 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8806871-0851-409e-a084-d83cec432132" (UID: "f8806871-0851-409e-a084-d83cec432132"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.916138 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8806871-0851-409e-a084-d83cec432132-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 21:57:58 crc kubenswrapper[4957]: I1128 21:57:58.916195 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b97xp\" (UniqueName: \"kubernetes.io/projected/f8806871-0851-409e-a084-d83cec432132-kube-api-access-b97xp\") on node \"crc\" DevicePath \"\"" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.186599 4957 generic.go:334] "Generic (PLEG): container finished" podID="f8806871-0851-409e-a084-d83cec432132" containerID="0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1" exitCode=0 Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.186663 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xm5pw" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.186686 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerDied","Data":"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1"} Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.187069 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xm5pw" event={"ID":"f8806871-0851-409e-a084-d83cec432132","Type":"ContainerDied","Data":"bf2694c0b4a43033268d79e9bcaf2a92c36b32582035df2fc2f451f1f24fede5"} Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.187108 4957 scope.go:117] "RemoveContainer" containerID="0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.212091 4957 scope.go:117] "RemoveContainer" containerID="4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.222803 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.236350 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xm5pw"] Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.250893 4957 scope.go:117] "RemoveContainer" containerID="83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.294746 4957 scope.go:117] "RemoveContainer" containerID="0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1" Nov 28 21:57:59 crc kubenswrapper[4957]: E1128 21:57:59.295275 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1\": container with ID starting with 0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1 not found: ID does not exist" containerID="0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.295314 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1"} err="failed to get container status 
\"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1\": rpc error: code = NotFound desc = could not find container \"0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1\": container with ID starting with 0e9bed16a436fa9c92020d06891a9fef20d22544dbf2f643f5cdc2ac309528f1 not found: ID does not exist" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.295340 4957 scope.go:117] "RemoveContainer" containerID="4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82" Nov 28 21:57:59 crc kubenswrapper[4957]: E1128 21:57:59.296107 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82\": container with ID starting with 4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82 not found: ID does not exist" containerID="4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.296138 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82"} err="failed to get container status \"4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82\": rpc error: code = NotFound desc = could not find container \"4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82\": container with ID starting with 4cc1d25d34f98648630f2dd7ab2bc6208fdeb03c7391dfa2b4f530e924d98b82 not found: ID does not exist" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.296155 4957 scope.go:117] "RemoveContainer" containerID="83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b" Nov 28 21:57:59 crc kubenswrapper[4957]: E1128 21:57:59.296494 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b\": container with ID starting with 83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b not found: ID does not exist" containerID="83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b" Nov 28 21:57:59 crc kubenswrapper[4957]: I1128 21:57:59.296515 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b"} err="failed to get container status \"83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b\": rpc error: code = NotFound desc = could not find container \"83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b\": container with ID starting with 83c00f6fb73e73f724d52f2ec7e9373e5689487a99fc4ed7e4975738c9d0d53b not found: ID does not exist" Nov 28 21:58:00 crc kubenswrapper[4957]: I1128 21:58:00.831838 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8806871-0851-409e-a084-d83cec432132" path="/var/lib/kubelet/pods/f8806871-0851-409e-a084-d83cec432132/volumes" Nov 28 21:58:04 crc kubenswrapper[4957]: I1128 21:58:04.813766 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:58:04 crc kubenswrapper[4957]: E1128 21:58:04.814286 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:58:16 crc kubenswrapper[4957]: I1128 21:58:16.813797 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:58:16 crc kubenswrapper[4957]: E1128 21:58:16.814713 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:58:31 crc kubenswrapper[4957]: I1128 21:58:31.813636 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:58:31 crc kubenswrapper[4957]: E1128 21:58:31.814773 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 21:58:46 crc kubenswrapper[4957]: I1128 21:58:46.813308 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 21:58:47 crc kubenswrapper[4957]: I1128 21:58:47.708969 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff"} Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.184721 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w"] Nov 28 22:00:00 crc kubenswrapper[4957]: E1128 22:00:00.186135 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="extract-content" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.186160 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="extract-content" Nov 28 22:00:00 crc kubenswrapper[4957]: E1128 22:00:00.186313 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="registry-server" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.186333 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="registry-server" Nov 28 22:00:00 crc kubenswrapper[4957]: E1128 22:00:00.186348 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="extract-utilities" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.186359 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="extract-utilities" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.186778 4957 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f8806871-0851-409e-a084-d83cec432132" containerName="registry-server" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.188107 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.190913 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.191326 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.197381 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w"] Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.222578 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.223241 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkk5x\" (UniqueName: \"kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.223524 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.324827 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.324904 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkk5x\" (UniqueName: \"kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.324971 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 
22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.326090 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.343649 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.345181 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkk5x\" (UniqueName: \"kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x\") pod \"collect-profiles-29406120-ctv2w\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:00 crc kubenswrapper[4957]: I1128 22:00:00.518196 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:01 crc kubenswrapper[4957]: I1128 22:00:01.015045 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w"] Nov 28 22:00:01 crc kubenswrapper[4957]: I1128 22:00:01.517477 4957 generic.go:334] "Generic (PLEG): container finished" podID="dcdc6630-d96c-4f3c-a2c6-6e304141de0a" containerID="34532db8d1e36fcb92be65f99aa36ef631ad79c9d272ef9f66eb66a2840e58db" exitCode=0 Nov 28 22:00:01 crc kubenswrapper[4957]: I1128 22:00:01.517543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" event={"ID":"dcdc6630-d96c-4f3c-a2c6-6e304141de0a","Type":"ContainerDied","Data":"34532db8d1e36fcb92be65f99aa36ef631ad79c9d272ef9f66eb66a2840e58db"} Nov 28 22:00:01 crc kubenswrapper[4957]: I1128 22:00:01.517805 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" event={"ID":"dcdc6630-d96c-4f3c-a2c6-6e304141de0a","Type":"ContainerStarted","Data":"e8222acaf62b76e319d4ad0325ecae74d6df7be78fabfc8f52261f8e3e69e2c1"} Nov 28 22:00:02 crc kubenswrapper[4957]: I1128 22:00:02.953488 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.090476 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume\") pod \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.090581 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkk5x\" (UniqueName: \"kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x\") pod \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.090763 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume\") pod \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\" (UID: \"dcdc6630-d96c-4f3c-a2c6-6e304141de0a\") " Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.091431 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcdc6630-d96c-4f3c-a2c6-6e304141de0a" (UID: "dcdc6630-d96c-4f3c-a2c6-6e304141de0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.097384 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x" (OuterVolumeSpecName: "kube-api-access-lkk5x") pod "dcdc6630-d96c-4f3c-a2c6-6e304141de0a" (UID: "dcdc6630-d96c-4f3c-a2c6-6e304141de0a"). InnerVolumeSpecName "kube-api-access-lkk5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.097405 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcdc6630-d96c-4f3c-a2c6-6e304141de0a" (UID: "dcdc6630-d96c-4f3c-a2c6-6e304141de0a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.193686 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.193715 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkk5x\" (UniqueName: \"kubernetes.io/projected/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-kube-api-access-lkk5x\") on node \"crc\" DevicePath \"\"" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.193726 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdc6630-d96c-4f3c-a2c6-6e304141de0a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.539386 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" event={"ID":"dcdc6630-d96c-4f3c-a2c6-6e304141de0a","Type":"ContainerDied","Data":"e8222acaf62b76e319d4ad0325ecae74d6df7be78fabfc8f52261f8e3e69e2c1"} Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.539652 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8222acaf62b76e319d4ad0325ecae74d6df7be78fabfc8f52261f8e3e69e2c1" Nov 28 22:00:03 crc kubenswrapper[4957]: I1128 22:00:03.539515 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w" Nov 28 22:00:04 crc kubenswrapper[4957]: I1128 22:00:04.032247 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98"] Nov 28 22:00:04 crc kubenswrapper[4957]: I1128 22:00:04.046114 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406075-pnn98"] Nov 28 22:00:04 crc kubenswrapper[4957]: I1128 22:00:04.829119 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877be3f7-c2ac-4682-87ca-10538b1a5973" path="/var/lib/kubelet/pods/877be3f7-c2ac-4682-87ca-10538b1a5973/volumes" Nov 28 22:00:29 crc kubenswrapper[4957]: I1128 22:00:29.903023 4957 scope.go:117] "RemoveContainer" containerID="daf13cfafed9c25376dd2f3de21d6d5bf0d62e72936f3fed272627a6b77294a4" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.149422 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29406121-b8vw6"] Nov 28 22:01:00 crc kubenswrapper[4957]: E1128 22:01:00.150425 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdc6630-d96c-4f3c-a2c6-6e304141de0a" containerName="collect-profiles" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.150441 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdc6630-d96c-4f3c-a2c6-6e304141de0a" containerName="collect-profiles" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.150692 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdc6630-d96c-4f3c-a2c6-6e304141de0a" containerName="collect-profiles" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.151496 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.161004 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406121-b8vw6"] Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.238128 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.238521 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44rq2\" (UniqueName: \"kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.238556 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.238624 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.340723 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44rq2\" (UniqueName: \"kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.340773 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.340843 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.340906 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.346955 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.347020 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.347345 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.367287 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44rq2\" (UniqueName: \"kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2\") pod \"keystone-cron-29406121-b8vw6\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.478807 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:00 crc kubenswrapper[4957]: I1128 22:01:00.954285 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29406121-b8vw6"] Nov 28 22:01:01 crc kubenswrapper[4957]: I1128 22:01:01.046892 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406121-b8vw6" event={"ID":"010a05b8-93e0-4601-9d1d-f865737b9230","Type":"ContainerStarted","Data":"015794a6348b6a4b901f10e60a5aea2f8dfcb278ce4bb7cd3873c745717b9486"} Nov 28 22:01:02 crc kubenswrapper[4957]: I1128 22:01:02.056732 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406121-b8vw6" event={"ID":"010a05b8-93e0-4601-9d1d-f865737b9230","Type":"ContainerStarted","Data":"821bb1a657f062a50a3f7330a781479654a24a16cfa3cd84dd9b4fb590cf66cf"} Nov 28 22:01:02 crc kubenswrapper[4957]: I1128 22:01:02.074756 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29406121-b8vw6" podStartSLOduration=2.074735009 podStartE2EDuration="2.074735009s" podCreationTimestamp="2025-11-28 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 22:01:02.070444222 +0000 UTC m=+4301.539092131" watchObservedRunningTime="2025-11-28 22:01:02.074735009 +0000 UTC m=+4301.543382918" Nov 28 22:01:04 crc kubenswrapper[4957]: I1128 22:01:04.078799 4957 generic.go:334] "Generic (PLEG): container finished" podID="010a05b8-93e0-4601-9d1d-f865737b9230" containerID="821bb1a657f062a50a3f7330a781479654a24a16cfa3cd84dd9b4fb590cf66cf" exitCode=0 Nov 28 22:01:04 crc kubenswrapper[4957]: I1128 22:01:04.079114 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406121-b8vw6" event={"ID":"010a05b8-93e0-4601-9d1d-f865737b9230","Type":"ContainerDied","Data":"821bb1a657f062a50a3f7330a781479654a24a16cfa3cd84dd9b4fb590cf66cf"} Nov 28 22:01:05 crc kubenswrapper[4957]: 
I1128 22:01:05.493968 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.574989 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys\") pod \"010a05b8-93e0-4601-9d1d-f865737b9230\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.575044 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44rq2\" (UniqueName: \"kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2\") pod \"010a05b8-93e0-4601-9d1d-f865737b9230\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.575306 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data\") pod \"010a05b8-93e0-4601-9d1d-f865737b9230\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.575371 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle\") pod \"010a05b8-93e0-4601-9d1d-f865737b9230\" (UID: \"010a05b8-93e0-4601-9d1d-f865737b9230\") " Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.580902 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "010a05b8-93e0-4601-9d1d-f865737b9230" (UID: "010a05b8-93e0-4601-9d1d-f865737b9230"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.581020 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2" (OuterVolumeSpecName: "kube-api-access-44rq2") pod "010a05b8-93e0-4601-9d1d-f865737b9230" (UID: "010a05b8-93e0-4601-9d1d-f865737b9230"). InnerVolumeSpecName "kube-api-access-44rq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.608894 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "010a05b8-93e0-4601-9d1d-f865737b9230" (UID: "010a05b8-93e0-4601-9d1d-f865737b9230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.633866 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data" (OuterVolumeSpecName: "config-data") pod "010a05b8-93e0-4601-9d1d-f865737b9230" (UID: "010a05b8-93e0-4601-9d1d-f865737b9230"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.678441 4957 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.678475 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44rq2\" (UniqueName: \"kubernetes.io/projected/010a05b8-93e0-4601-9d1d-f865737b9230-kube-api-access-44rq2\") on node \"crc\" DevicePath \"\"" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.678486 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 22:01:05 crc kubenswrapper[4957]: I1128 22:01:05.678495 4957 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010a05b8-93e0-4601-9d1d-f865737b9230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 22:01:06 crc kubenswrapper[4957]: I1128 22:01:06.103698 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29406121-b8vw6" event={"ID":"010a05b8-93e0-4601-9d1d-f865737b9230","Type":"ContainerDied","Data":"015794a6348b6a4b901f10e60a5aea2f8dfcb278ce4bb7cd3873c745717b9486"} Nov 28 22:01:06 crc kubenswrapper[4957]: I1128 22:01:06.104030 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="015794a6348b6a4b901f10e60a5aea2f8dfcb278ce4bb7cd3873c745717b9486" Nov 28 22:01:06 crc kubenswrapper[4957]: I1128 22:01:06.103836 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29406121-b8vw6" Nov 28 22:01:08 crc kubenswrapper[4957]: I1128 22:01:08.992394 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:01:08 crc kubenswrapper[4957]: I1128 22:01:08.993086 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:01:38 crc kubenswrapper[4957]: I1128 22:01:38.992333 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:01:38 crc kubenswrapper[4957]: I1128 22:01:38.992886 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:02:08 crc kubenswrapper[4957]: I1128 22:02:08.992008 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:02:08 crc kubenswrapper[4957]: I1128 22:02:08.992665 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:02:08 crc kubenswrapper[4957]: I1128 22:02:08.992719 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:02:08 crc kubenswrapper[4957]: I1128 22:02:08.993687 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:02:08 crc kubenswrapper[4957]: I1128 22:02:08.993769 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff" gracePeriod=600 Nov 28 22:02:09 crc kubenswrapper[4957]: I1128 22:02:09.764454 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff" exitCode=0 Nov 28 22:02:09 crc kubenswrapper[4957]: I1128 22:02:09.764516 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff"} Nov 28 22:02:09 crc kubenswrapper[4957]: I1128 22:02:09.765230 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2"} Nov 28 22:02:09 crc kubenswrapper[4957]: I1128 22:02:09.765250 4957 scope.go:117] "RemoveContainer" containerID="10923b576cbc05cefdcda48a33e16a3e54673c59eba4994d5974e17ac56fbcc7" Nov 28 22:04:38 crc kubenswrapper[4957]: I1128 22:04:38.993278 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:04:38 crc kubenswrapper[4957]: I1128 22:04:38.994030 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:04:39 crc kubenswrapper[4957]: I1128 22:04:39.991067 4957 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:04:39 crc kubenswrapper[4957]: E1128 22:04:39.991653 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010a05b8-93e0-4601-9d1d-f865737b9230" containerName="keystone-cron" Nov 28 22:04:39 crc kubenswrapper[4957]: I1128 22:04:39.991671 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="010a05b8-93e0-4601-9d1d-f865737b9230" containerName="keystone-cron" Nov 28 22:04:39 crc kubenswrapper[4957]: I1128 22:04:39.991885 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="010a05b8-93e0-4601-9d1d-f865737b9230" containerName="keystone-cron" Nov 28 22:04:39 crc kubenswrapper[4957]: I1128 22:04:39.993653 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.004330 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.075006 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.075052 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.075105 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8v2\" (UniqueName: \"kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.177296 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.177613 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.177670 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g8v2\" (UniqueName: \"kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.177980 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.178015 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.199843 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g8v2\" (UniqueName: \"kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2\") pod \"redhat-operators-ks9lr\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.329278 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:40 crc kubenswrapper[4957]: I1128 22:04:40.840383 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:04:41 crc kubenswrapper[4957]: E1128 22:04:41.363490 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9251d2f_5539_42bb_b57c_6e5099f9ed46.slice/crio-d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9251d2f_5539_42bb_b57c_6e5099f9ed46.slice/crio-conmon-d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a.scope\": RecentStats: unable to find data in memory cache]" Nov 28 22:04:41 crc kubenswrapper[4957]: I1128 22:04:41.714548 4957 generic.go:334] "Generic (PLEG): container finished" podID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerID="d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a" exitCode=0 Nov 28 22:04:41 crc kubenswrapper[4957]: I1128 22:04:41.714806 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerDied","Data":"d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a"} Nov 28 22:04:41 crc kubenswrapper[4957]: I1128 22:04:41.714830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerStarted","Data":"d3557d4f6ef84a40d97bfbc9d8fad14ef59b79fd5c9ae6b3b7d41ab7d20fec05"} Nov 28 22:04:41 crc kubenswrapper[4957]: I1128 22:04:41.718630 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:04:43 crc kubenswrapper[4957]: I1128 22:04:43.739259 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerStarted","Data":"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed"} Nov 28 22:04:45 crc kubenswrapper[4957]: I1128 22:04:45.761991 4957 generic.go:334] "Generic (PLEG): container finished" 
podID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerID="52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed" exitCode=0 Nov 28 22:04:45 crc kubenswrapper[4957]: I1128 22:04:45.762034 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerDied","Data":"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed"} Nov 28 22:04:46 crc kubenswrapper[4957]: I1128 22:04:46.776940 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerStarted","Data":"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481"} Nov 28 22:04:46 crc kubenswrapper[4957]: I1128 22:04:46.807142 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ks9lr" podStartSLOduration=3.312391898 podStartE2EDuration="7.807116925s" podCreationTimestamp="2025-11-28 22:04:39 +0000 UTC" firstStartedPulling="2025-11-28 22:04:41.71836558 +0000 UTC m=+4521.187013489" lastFinishedPulling="2025-11-28 22:04:46.213090607 +0000 UTC m=+4525.681738516" observedRunningTime="2025-11-28 22:04:46.799671801 +0000 UTC m=+4526.268319710" watchObservedRunningTime="2025-11-28 22:04:46.807116925 +0000 UTC m=+4526.275764834" Nov 28 22:04:50 crc kubenswrapper[4957]: I1128 22:04:50.330304 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:50 crc kubenswrapper[4957]: I1128 22:04:50.330832 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:04:51 crc kubenswrapper[4957]: I1128 22:04:51.670937 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ks9lr" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="registry-server" probeResult="failure" output=< Nov 28 22:04:51 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 22:04:51 crc kubenswrapper[4957]: > Nov 28 22:05:00 crc kubenswrapper[4957]: I1128 22:05:00.631563 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:05:00 crc kubenswrapper[4957]: I1128 22:05:00.685959 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:05:00 crc kubenswrapper[4957]: I1128 22:05:00.869097 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:05:01 crc kubenswrapper[4957]: I1128 22:05:01.952789 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ks9lr" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="registry-server" containerID="cri-o://ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481" gracePeriod=2 Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.518119 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.590369 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8v2\" (UniqueName: \"kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2\") pod \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.590439 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content\") pod \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.590479 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities\") pod \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\" (UID: \"a9251d2f-5539-42bb-b57c-6e5099f9ed46\") " Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.592567 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities" (OuterVolumeSpecName: "utilities") pod "a9251d2f-5539-42bb-b57c-6e5099f9ed46" (UID: "a9251d2f-5539-42bb-b57c-6e5099f9ed46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.597662 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2" (OuterVolumeSpecName: "kube-api-access-4g8v2") pod "a9251d2f-5539-42bb-b57c-6e5099f9ed46" (UID: "a9251d2f-5539-42bb-b57c-6e5099f9ed46"). InnerVolumeSpecName "kube-api-access-4g8v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.704064 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g8v2\" (UniqueName: \"kubernetes.io/projected/a9251d2f-5539-42bb-b57c-6e5099f9ed46-kube-api-access-4g8v2\") on node \"crc\" DevicePath \"\"" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.704385 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.715604 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9251d2f-5539-42bb-b57c-6e5099f9ed46" (UID: "a9251d2f-5539-42bb-b57c-6e5099f9ed46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.809418 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9251d2f-5539-42bb-b57c-6e5099f9ed46-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.969342 4957 generic.go:334] "Generic (PLEG): container finished" podID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerID="ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481" exitCode=0 Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.969404 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ks9lr" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.969404 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerDied","Data":"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481"} Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.969461 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ks9lr" event={"ID":"a9251d2f-5539-42bb-b57c-6e5099f9ed46","Type":"ContainerDied","Data":"d3557d4f6ef84a40d97bfbc9d8fad14ef59b79fd5c9ae6b3b7d41ab7d20fec05"} Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.969480 4957 scope.go:117] "RemoveContainer" containerID="ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.995487 4957 scope.go:117] "RemoveContainer" containerID="52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed" Nov 28 22:05:02 crc kubenswrapper[4957]: I1128 22:05:02.997526 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.008871 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ks9lr"] Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.021112 4957 scope.go:117] "RemoveContainer" containerID="d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a" Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.068080 4957 scope.go:117] "RemoveContainer" containerID="ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481" Nov 28 22:05:03 crc kubenswrapper[4957]: E1128 22:05:03.068636 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481\": container with ID starting with ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481 not found: ID does not exist" containerID="ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481" Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.068672 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481"} err="failed to get container status \"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481\": rpc error: code = NotFound desc = could not find container \"ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481\": container with ID starting with ab445d62d8c4ca6c4a4401ac1f5f9d62c87d68e59fa18fc96c2f7890d24ce481 not found: ID does not exist" Nov 28 22:05:03 crc 
kubenswrapper[4957]: I1128 22:05:03.068694 4957 scope.go:117] "RemoveContainer" containerID="52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed" Nov 28 22:05:03 crc kubenswrapper[4957]: E1128 22:05:03.068981 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed\": container with ID starting with 52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed not found: ID does not exist" containerID="52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed" Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.069003 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed"} err="failed to get container status \"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed\": rpc error: code = NotFound desc = could not find container \"52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed\": container with ID starting with 52ab4bc0f25cdf01061d9e682c08fa03ac0c7e8f4ffbf4ab4c3a4c0b85b814ed not found: ID does not exist" Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.069016 4957 scope.go:117] "RemoveContainer" containerID="d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a" Nov 28 22:05:03 crc kubenswrapper[4957]: E1128 22:05:03.069402 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a\": container with ID starting with d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a not found: ID does not exist" containerID="d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a" Nov 28 22:05:03 crc kubenswrapper[4957]: I1128 22:05:03.069502 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a"} err="failed to get container status \"d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a\": rpc error: code = NotFound desc = could not find container \"d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a\": container with ID starting with d34e17e95f371eb6ec5c00a8860461692122b4c0cb97362367d2946a4efdca4a not found: ID does not exist" Nov 28 22:05:04 crc kubenswrapper[4957]: I1128 22:05:04.829040 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" path="/var/lib/kubelet/pods/a9251d2f-5539-42bb-b57c-6e5099f9ed46/volumes" Nov 28 22:05:07 crc kubenswrapper[4957]: E1128 22:05:07.165883 4957 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.111:56798->38.102.83.111:40891: read tcp 38.102.83.111:56798->38.102.83.111:40891: read: connection reset by peer Nov 28 22:05:08 crc kubenswrapper[4957]: I1128 22:05:08.992621 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:05:08 crc kubenswrapper[4957]: I1128 22:05:08.993001 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" 
podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:05:38 crc kubenswrapper[4957]: I1128 22:05:38.992033 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:05:38 crc kubenswrapper[4957]: I1128 22:05:38.992571 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:05:38 crc kubenswrapper[4957]: I1128 22:05:38.992620 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:05:38 crc kubenswrapper[4957]: I1128 22:05:38.993496 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:05:38 crc kubenswrapper[4957]: I1128 22:05:38.993550 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" gracePeriod=600 Nov 28 22:05:39 crc kubenswrapper[4957]: E1128 22:05:39.115355 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:05:39 crc kubenswrapper[4957]: I1128 22:05:39.379508 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" exitCode=0 Nov 28 22:05:39 crc kubenswrapper[4957]: I1128 22:05:39.379543 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2"} Nov 28 22:05:39 crc kubenswrapper[4957]: I1128 22:05:39.379600 4957 scope.go:117] "RemoveContainer" containerID="1a8421570eed21934a04a72de5086b7695217ce013842f4d6a6431feab9aabff" Nov 28 22:05:39 crc kubenswrapper[4957]: I1128 22:05:39.380431 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:05:39 crc kubenswrapper[4957]: E1128 22:05:39.380758 4957 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:05:52 crc kubenswrapper[4957]: I1128 22:05:52.814152 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:05:52 crc kubenswrapper[4957]: E1128 22:05:52.815352 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:05 crc kubenswrapper[4957]: I1128 22:06:05.813524 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:06:05 crc kubenswrapper[4957]: E1128 22:06:05.814310 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:18 crc kubenswrapper[4957]: I1128 22:06:18.813751 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:06:18 crc kubenswrapper[4957]: E1128 22:06:18.814491 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:32 crc kubenswrapper[4957]: I1128 22:06:32.812922 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:06:32 crc kubenswrapper[4957]: E1128 22:06:32.814524 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:46 crc kubenswrapper[4957]: I1128 22:06:46.813354 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:06:46 crc kubenswrapper[4957]: E1128 22:06:46.814196 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.673877 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:06:54 crc kubenswrapper[4957]: E1128 22:06:54.674988 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="extract-utilities" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.675002 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="extract-utilities" Nov 28 22:06:54 crc kubenswrapper[4957]: E1128 22:06:54.675022 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="extract-content" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.675030 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="extract-content" Nov 28 22:06:54 crc kubenswrapper[4957]: E1128 22:06:54.675050 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="registry-server" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.675056 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="registry-server" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.675318 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9251d2f-5539-42bb-b57c-6e5099f9ed46" containerName="registry-server" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.677180 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.689680 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.777186 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.777334 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhpb\" (UniqueName: \"kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.777393 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.879473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.879559 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhpb\" (UniqueName: \"kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.879593 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.880064 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.880090 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.898316 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tkhpb\" (UniqueName: \"kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb\") pod \"community-operators-jn7c4\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:54 crc kubenswrapper[4957]: I1128 22:06:54.999492 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:06:55 crc kubenswrapper[4957]: I1128 22:06:55.504273 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:06:55 crc kubenswrapper[4957]: W1128 22:06:55.506853 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c534667_9488_471d_8421_2b31d5c7c04b.slice/crio-7dcd8013bf93b7803e50520f9cbfc05b38fd372ab381c578ddb5c0e1748b11bc WatchSource:0}: Error finding container 7dcd8013bf93b7803e50520f9cbfc05b38fd372ab381c578ddb5c0e1748b11bc: Status 404 returned error can't find the container with id 7dcd8013bf93b7803e50520f9cbfc05b38fd372ab381c578ddb5c0e1748b11bc Nov 28 22:06:56 crc kubenswrapper[4957]: I1128 22:06:56.326496 4957 generic.go:334] "Generic (PLEG): container finished" podID="1c534667-9488-471d-8421-2b31d5c7c04b" containerID="6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007" exitCode=0 Nov 28 22:06:56 crc kubenswrapper[4957]: I1128 22:06:56.326606 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerDied","Data":"6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007"} Nov 28 22:06:56 crc kubenswrapper[4957]: I1128 22:06:56.326859 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerStarted","Data":"7dcd8013bf93b7803e50520f9cbfc05b38fd372ab381c578ddb5c0e1748b11bc"} Nov 28 22:06:57 crc kubenswrapper[4957]: I1128 22:06:57.339136 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerStarted","Data":"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251"} Nov 28 22:06:57 crc kubenswrapper[4957]: I1128 22:06:57.814630 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:06:57 crc kubenswrapper[4957]: E1128 22:06:57.815938 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:06:58 crc kubenswrapper[4957]: I1128 22:06:58.354393 4957 generic.go:334] "Generic (PLEG): container finished" podID="1c534667-9488-471d-8421-2b31d5c7c04b" containerID="1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251" exitCode=0 Nov 28 22:06:58 crc kubenswrapper[4957]: I1128 22:06:58.354458 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" 
event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerDied","Data":"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251"} Nov 28 22:06:59 crc kubenswrapper[4957]: I1128 22:06:59.367187 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerStarted","Data":"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf"} Nov 28 22:06:59 crc kubenswrapper[4957]: I1128 22:06:59.395868 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jn7c4" podStartSLOduration=2.944929332 podStartE2EDuration="5.395846434s" podCreationTimestamp="2025-11-28 22:06:54 +0000 UTC" firstStartedPulling="2025-11-28 22:06:56.328789655 +0000 UTC m=+4655.797437554" lastFinishedPulling="2025-11-28 22:06:58.779706747 +0000 UTC m=+4658.248354656" observedRunningTime="2025-11-28 22:06:59.386996415 +0000 UTC m=+4658.855644324" watchObservedRunningTime="2025-11-28 22:06:59.395846434 +0000 UTC m=+4658.864494343" Nov 28 22:07:05 crc kubenswrapper[4957]: I1128 22:07:05.000366 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:05 crc kubenswrapper[4957]: I1128 22:07:05.000977 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:05 crc kubenswrapper[4957]: I1128 22:07:05.838685 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:05 crc kubenswrapper[4957]: I1128 22:07:05.899567 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:06 crc kubenswrapper[4957]: I1128 22:07:06.079104 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:07:07 crc kubenswrapper[4957]: I1128 22:07:07.462683 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jn7c4" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="registry-server" containerID="cri-o://5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf" gracePeriod=2 Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.191794 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.297790 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkhpb\" (UniqueName: \"kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb\") pod \"1c534667-9488-471d-8421-2b31d5c7c04b\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.298098 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content\") pod \"1c534667-9488-471d-8421-2b31d5c7c04b\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.298358 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities\") pod \"1c534667-9488-471d-8421-2b31d5c7c04b\" (UID: \"1c534667-9488-471d-8421-2b31d5c7c04b\") " Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.299619 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities" (OuterVolumeSpecName: "utilities") pod "1c534667-9488-471d-8421-2b31d5c7c04b" (UID: "1c534667-9488-471d-8421-2b31d5c7c04b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.302428 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb" (OuterVolumeSpecName: "kube-api-access-tkhpb") pod "1c534667-9488-471d-8421-2b31d5c7c04b" (UID: "1c534667-9488-471d-8421-2b31d5c7c04b"). InnerVolumeSpecName "kube-api-access-tkhpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.350163 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c534667-9488-471d-8421-2b31d5c7c04b" (UID: "1c534667-9488-471d-8421-2b31d5c7c04b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.400926 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.401350 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c534667-9488-471d-8421-2b31d5c7c04b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.401421 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkhpb\" (UniqueName: \"kubernetes.io/projected/1c534667-9488-471d-8421-2b31d5c7c04b-kube-api-access-tkhpb\") on node \"crc\" DevicePath \"\"" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.479157 4957 generic.go:334] "Generic (PLEG): container finished" podID="1c534667-9488-471d-8421-2b31d5c7c04b" containerID="5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf" exitCode=0 Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.479256 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerDied","Data":"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf"} Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.479310 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn7c4" event={"ID":"1c534667-9488-471d-8421-2b31d5c7c04b","Type":"ContainerDied","Data":"7dcd8013bf93b7803e50520f9cbfc05b38fd372ab381c578ddb5c0e1748b11bc"} Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.479333 4957 scope.go:117] "RemoveContainer" containerID="5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.480636 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn7c4" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.508387 4957 scope.go:117] "RemoveContainer" containerID="1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.518022 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.528917 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jn7c4"] Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.549712 4957 scope.go:117] "RemoveContainer" containerID="6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.595086 4957 scope.go:117] "RemoveContainer" containerID="5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf" Nov 28 22:07:08 crc kubenswrapper[4957]: E1128 22:07:08.595724 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf\": container with ID starting with 5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf not found: ID does not exist" containerID="5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.595768 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf"} err="failed to get container status \"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf\": rpc error: code = NotFound desc = could not find container \"5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf\": container with ID starting with 5383996c5a29a72d0337871b2e0b5cceda6f529d189521c37fd3c72163a4d9bf not found: ID does not exist" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.595796 4957 scope.go:117] "RemoveContainer" containerID="1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251" Nov 28 22:07:08 crc kubenswrapper[4957]: E1128 22:07:08.596142 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251\": container with ID starting with 1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251 not found: ID does not exist" containerID="1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.596185 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251"} err="failed to get container status \"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251\": rpc error: code = NotFound desc = could not find container \"1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251\": container with ID starting with 1c886894b7859bf1645b76ae90f7262729c43702d4f47509d0cacd8f0bee5251 not found: ID does not exist" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.596243 4957 scope.go:117] "RemoveContainer" containerID="6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007" Nov 28 22:07:08 crc kubenswrapper[4957]: E1128 22:07:08.596604 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007\": container with ID starting with 6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007 not found: ID does not exist" containerID="6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.596631 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007"} err="failed to get container status \"6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007\": rpc error: code = NotFound desc = could not find container \"6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007\": container with ID starting with 6a691e105e03dc30b9d8d2e4d121e1523b42e3277e5302001cb982ae34225007 not found: ID does not exist" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.813336 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:07:08 crc kubenswrapper[4957]: E1128 22:07:08.813786 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:07:08 crc kubenswrapper[4957]: I1128 22:07:08.833828 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" path="/var/lib/kubelet/pods/1c534667-9488-471d-8421-2b31d5c7c04b/volumes" Nov 28 22:07:19 crc kubenswrapper[4957]: I1128 22:07:19.813682 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:07:19 crc kubenswrapper[4957]: E1128 22:07:19.814448 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:07:34 crc kubenswrapper[4957]: I1128 22:07:34.813180 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:07:34 crc kubenswrapper[4957]: E1128 22:07:34.813973 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:07:46 crc kubenswrapper[4957]: I1128 22:07:46.814583 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:07:46 crc kubenswrapper[4957]: E1128 22:07:46.815959 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:07:59 crc kubenswrapper[4957]: I1128 22:07:59.813331 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:07:59 crc kubenswrapper[4957]: E1128 22:07:59.814052 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:08:13 crc kubenswrapper[4957]: I1128 22:08:13.814162 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:08:13 crc kubenswrapper[4957]: E1128 22:08:13.816171 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.520843 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 22:08:26 crc kubenswrapper[4957]: E1128 22:08:26.522020 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="registry-server" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.522039 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="registry-server" Nov 28 22:08:26 crc kubenswrapper[4957]: E1128 22:08:26.522053 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="extract-utilities" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.522063 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="extract-utilities" Nov 28 22:08:26 crc kubenswrapper[4957]: E1128 22:08:26.522111 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="extract-content" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.522119 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="extract-content" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.522423 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c534667-9488-471d-8421-2b31d5c7c04b" containerName="registry-server" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.523528 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.528072 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.528297 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.528302 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.532674 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jg9m9" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.535652 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.657512 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.657947 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658274 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658546 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658619 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9jj\" (UniqueName: \"kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658679 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658745 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658846 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.658875 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761045 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761162 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761266 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761293 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9jj\" (UniqueName: \"kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761312 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761376 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761420 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761447 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761513 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.761691 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.762058 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.762081 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.763063 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.763324 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.770185 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.771134 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc 
kubenswrapper[4957]: I1128 22:08:26.772570 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.780364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9jj\" (UniqueName: \"kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.795781 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") " pod="openstack/tempest-tests-tempest" Nov 28 22:08:26 crc kubenswrapper[4957]: I1128 22:08:26.853183 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 22:08:27 crc kubenswrapper[4957]: I1128 22:08:27.328852 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 22:08:27 crc kubenswrapper[4957]: I1128 22:08:27.755877 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d0047755-5ddc-48c8-a4eb-4bf540cb695f","Type":"ContainerStarted","Data":"2c36875f04fb6b924af92d13ec13a6943d04b2db97606c7fe6d3463cfdc5496c"} Nov 28 22:08:28 crc kubenswrapper[4957]: I1128 22:08:28.813391 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:08:28 crc kubenswrapper[4957]: E1128 22:08:28.813861 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.716967 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bf7f4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.717848 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" podUID="74828378-0762-464d-b1c5-bda879361119" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.716967 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.716976 4957 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bf7f4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.718000 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.718069 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bf7f4" podUID="74828378-0762-464d-b1c5-bda879361119" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.717026 4957 patch_prober.go:28] interesting pod/router-default-5444994796-t54js container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 22:08:36 crc kubenswrapper[4957]: I1128 22:08:36.718130 4957 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-t54js" podUID="67aafc66-e89d-468e-b26c-c6cd8c842020" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 22:08:39 crc kubenswrapper[4957]: I1128 22:08:39.814547 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:08:39 crc kubenswrapper[4957]: E1128 22:08:39.815364 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:08:54 crc kubenswrapper[4957]: I1128 22:08:54.813143 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:08:54 crc kubenswrapper[4957]: E1128 22:08:54.814083 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:08:56 crc kubenswrapper[4957]: E1128 22:08:56.921543 4957 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 28 22:08:56 crc kubenswrapper[4957]: E1128 22:08:56.923139 4957 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9q9jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d0047755-5ddc-48c8-a4eb-4bf540cb695f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 22:08:56 crc kubenswrapper[4957]: E1128 22:08:56.924681 4957 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" Nov 28 22:08:57 crc kubenswrapper[4957]: E1128 22:08:57.089635 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" Nov 28 22:09:07 crc kubenswrapper[4957]: I1128 22:09:07.813585 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:09:07 crc kubenswrapper[4957]: E1128 22:09:07.814506 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:09:10 crc kubenswrapper[4957]: I1128 22:09:10.221286 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d0047755-5ddc-48c8-a4eb-4bf540cb695f","Type":"ContainerStarted","Data":"139d81dab751f5bcb2bdcb1152d9b2f84c4a175deb77e611b29c9f87974da598"} Nov 28 22:09:10 crc kubenswrapper[4957]: I1128 22:09:10.247454 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.264239263 podStartE2EDuration="45.247433225s" podCreationTimestamp="2025-11-28 22:08:25 +0000 UTC" firstStartedPulling="2025-11-28 22:08:27.339440688 +0000 UTC m=+4746.808088597" lastFinishedPulling="2025-11-28 22:09:08.32263465 +0000 UTC m=+4787.791282559" observedRunningTime="2025-11-28 22:09:10.241446227 +0000 UTC m=+4789.710094126" watchObservedRunningTime="2025-11-28 22:09:10.247433225 +0000 UTC m=+4789.716081134" Nov 28 22:09:19 crc kubenswrapper[4957]: I1128 22:09:19.813770 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:09:19 crc kubenswrapper[4957]: E1128 22:09:19.814660 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:09:31 crc kubenswrapper[4957]: I1128 22:09:31.813656 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:09:31 crc kubenswrapper[4957]: E1128 22:09:31.814453 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:09:45 crc kubenswrapper[4957]: I1128 22:09:45.813496 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:09:45 crc kubenswrapper[4957]: E1128 22:09:45.814305 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:09:56 crc kubenswrapper[4957]: I1128 22:09:56.813618 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:09:56 crc kubenswrapper[4957]: E1128 22:09:56.814349 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:10:07 crc kubenswrapper[4957]: I1128 22:10:07.814547 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:10:07 crc kubenswrapper[4957]: E1128 22:10:07.815653 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:10:20 crc kubenswrapper[4957]: I1128 22:10:20.823777 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:10:20 crc kubenswrapper[4957]: E1128 22:10:20.824640 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:10:32 crc kubenswrapper[4957]: I1128 22:10:32.814442 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:10:32 crc kubenswrapper[4957]: E1128 22:10:32.815341 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:10:45 crc kubenswrapper[4957]: I1128 22:10:45.813747 4957 
scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:10:46 crc kubenswrapper[4957]: I1128 22:10:46.274454 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881"} Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.432088 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.436850 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.449700 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.543788 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.543895 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.544392 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfps\" (UniqueName: \"kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.646445 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.646684 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfps\" (UniqueName: \"kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.646787 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.647000 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.647178 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.668502 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfps\" (UniqueName: \"kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps\") pod \"certified-operators-m4k96\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:28 crc kubenswrapper[4957]: I1128 22:12:28.761937 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:29 crc kubenswrapper[4957]: I1128 22:12:29.482033 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:30 crc kubenswrapper[4957]: I1128 22:12:30.395189 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerDied","Data":"8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a"} Nov 28 22:12:30 crc kubenswrapper[4957]: I1128 22:12:30.395550 4957 generic.go:334] "Generic (PLEG): container finished" podID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerID="8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a" exitCode=0 Nov 28 22:12:30 crc kubenswrapper[4957]: I1128 22:12:30.395594 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerStarted","Data":"36c17dfe078d5d3d4e538dd5057f98939381b7994c7e1cd47acc84dda49f4e0f"} Nov 28 22:12:30 crc kubenswrapper[4957]: I1128 22:12:30.398608 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.407438 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerStarted","Data":"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996"} Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.621862 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.624824 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.634334 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.815638 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9nz\" (UniqueName: \"kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.815693 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.815734 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.919499 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq9nz\" (UniqueName: \"kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.919905 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.919976 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.920318 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.921479 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.945753 4957 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rq9nz\" (UniqueName: \"kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz\") pod \"redhat-marketplace-5rldv\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:31 crc kubenswrapper[4957]: I1128 22:12:31.947477 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:32 crc kubenswrapper[4957]: E1128 22:12:32.429624 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2395d2d9_eb1c_4aec_9a76_0672bed77191.slice/crio-2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996.scope\": RecentStats: unable to find data in memory cache]" Nov 28 22:12:32 crc kubenswrapper[4957]: I1128 22:12:32.554556 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:32 crc kubenswrapper[4957]: W1128 22:12:32.582500 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda244a33e_4114_43ee_9854_5db407d23e85.slice/crio-b20d01e888fe72dc4d9df1f48a3ffc75bdea5c60f5fe57d7a35b67c05a8ef85b WatchSource:0}: Error finding container b20d01e888fe72dc4d9df1f48a3ffc75bdea5c60f5fe57d7a35b67c05a8ef85b: Status 404 returned error can't find the container with id b20d01e888fe72dc4d9df1f48a3ffc75bdea5c60f5fe57d7a35b67c05a8ef85b Nov 28 22:12:33 crc kubenswrapper[4957]: I1128 22:12:33.433665 4957 generic.go:334] "Generic (PLEG): container finished" podID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerID="2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996" exitCode=0 Nov 28 22:12:33 crc kubenswrapper[4957]: I1128 22:12:33.433802 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerDied","Data":"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996"} Nov 28 22:12:33 crc kubenswrapper[4957]: I1128 22:12:33.438113 4957 generic.go:334] "Generic (PLEG): container finished" podID="a244a33e-4114-43ee-9854-5db407d23e85" containerID="faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620" exitCode=0 Nov 28 22:12:33 crc kubenswrapper[4957]: I1128 22:12:33.438433 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerDied","Data":"faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620"} Nov 28 22:12:33 crc kubenswrapper[4957]: I1128 22:12:33.438501 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerStarted","Data":"b20d01e888fe72dc4d9df1f48a3ffc75bdea5c60f5fe57d7a35b67c05a8ef85b"} Nov 28 22:12:34 crc kubenswrapper[4957]: I1128 22:12:34.450593 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerStarted","Data":"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66"} Nov 28 22:12:34 crc kubenswrapper[4957]: I1128 22:12:34.453998 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerStarted","Data":"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990"} Nov 28 22:12:34 crc kubenswrapper[4957]: I1128 22:12:34.609194 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m4k96" podStartSLOduration=3.100815404 podStartE2EDuration="6.6088544s" podCreationTimestamp="2025-11-28 22:12:28 +0000 UTC" firstStartedPulling="2025-11-28 22:12:30.397516779 +0000 UTC m=+4989.866164688" lastFinishedPulling="2025-11-28 22:12:33.905555775 +0000 UTC m=+4993.374203684" observedRunningTime="2025-11-28 22:12:34.606711357 +0000 UTC m=+4994.075359266" watchObservedRunningTime="2025-11-28 22:12:34.6088544 +0000 UTC m=+4994.077502309" Nov 28 22:12:35 crc kubenswrapper[4957]: I1128 22:12:35.464626 4957 generic.go:334] "Generic (PLEG): container finished" podID="a244a33e-4114-43ee-9854-5db407d23e85" containerID="cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66" exitCode=0 Nov 28 22:12:35 crc kubenswrapper[4957]: I1128 22:12:35.464672 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerDied","Data":"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66"} Nov 28 22:12:36 crc kubenswrapper[4957]: I1128 22:12:36.477001 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerStarted","Data":"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905"} Nov 28 22:12:36 crc kubenswrapper[4957]: I1128 22:12:36.495906 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rldv" podStartSLOduration=2.983507141 podStartE2EDuration="5.495890679s" podCreationTimestamp="2025-11-28 22:12:31 +0000 UTC" firstStartedPulling="2025-11-28 22:12:33.440122997 +0000 UTC m=+4992.908770936" lastFinishedPulling="2025-11-28 22:12:35.952506565 +0000 UTC m=+4995.421154474" observedRunningTime="2025-11-28 22:12:36.491683625 +0000 UTC m=+4995.960331524" watchObservedRunningTime="2025-11-28 22:12:36.495890679 +0000 UTC m=+4995.964538588" Nov 28 22:12:38 crc kubenswrapper[4957]: I1128 22:12:38.762566 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:38 crc kubenswrapper[4957]: I1128 22:12:38.763110 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:39 crc kubenswrapper[4957]: I1128 22:12:39.815345 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m4k96" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="registry-server" probeResult="failure" output=< Nov 28 22:12:39 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 22:12:39 crc kubenswrapper[4957]: > Nov 28 22:12:41 crc kubenswrapper[4957]: I1128 22:12:41.948584 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:41 crc kubenswrapper[4957]: I1128 22:12:41.948906 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 
22:12:41 crc kubenswrapper[4957]: I1128 22:12:41.999192 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:42 crc kubenswrapper[4957]: I1128 22:12:42.582819 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:45 crc kubenswrapper[4957]: I1128 22:12:45.622247 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:45 crc kubenswrapper[4957]: I1128 22:12:45.623385 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rldv" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="registry-server" containerID="cri-o://1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905" gracePeriod=2 Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.258808 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.351236 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq9nz\" (UniqueName: \"kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz\") pod \"a244a33e-4114-43ee-9854-5db407d23e85\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.351352 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content\") pod \"a244a33e-4114-43ee-9854-5db407d23e85\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.351566 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities\") pod \"a244a33e-4114-43ee-9854-5db407d23e85\" (UID: \"a244a33e-4114-43ee-9854-5db407d23e85\") " Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.353323 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities" (OuterVolumeSpecName: "utilities") pod "a244a33e-4114-43ee-9854-5db407d23e85" (UID: "a244a33e-4114-43ee-9854-5db407d23e85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.383558 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz" (OuterVolumeSpecName: "kube-api-access-rq9nz") pod "a244a33e-4114-43ee-9854-5db407d23e85" (UID: "a244a33e-4114-43ee-9854-5db407d23e85"). InnerVolumeSpecName "kube-api-access-rq9nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.393394 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a244a33e-4114-43ee-9854-5db407d23e85" (UID: "a244a33e-4114-43ee-9854-5db407d23e85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.457758 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.457796 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq9nz\" (UniqueName: \"kubernetes.io/projected/a244a33e-4114-43ee-9854-5db407d23e85-kube-api-access-rq9nz\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.457806 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a244a33e-4114-43ee-9854-5db407d23e85-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.573318 4957 generic.go:334] "Generic (PLEG): container finished" podID="a244a33e-4114-43ee-9854-5db407d23e85" containerID="1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905" exitCode=0 Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.573359 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerDied","Data":"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905"} Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.573383 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rldv" event={"ID":"a244a33e-4114-43ee-9854-5db407d23e85","Type":"ContainerDied","Data":"b20d01e888fe72dc4d9df1f48a3ffc75bdea5c60f5fe57d7a35b67c05a8ef85b"} Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.573400 4957 scope.go:117] "RemoveContainer" containerID="1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.573517 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rldv" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.612833 4957 scope.go:117] "RemoveContainer" containerID="cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.618417 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.631296 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rldv"] Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.635353 4957 scope.go:117] "RemoveContainer" containerID="faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.691586 4957 scope.go:117] "RemoveContainer" containerID="1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905" Nov 28 22:12:46 crc kubenswrapper[4957]: E1128 22:12:46.692005 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905\": container with ID starting with 1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905 not found: ID does not exist" containerID="1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.692353 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905"} err="failed to get container status \"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905\": rpc error: code = NotFound desc = could not find container \"1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905\": container with ID starting with 1b7c9f6c288dd37b92e7fbd46032601cec8b38901918ef2b832cb43462218905 not found: ID does not exist" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.692411 4957 scope.go:117] "RemoveContainer" containerID="cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66" Nov 28 22:12:46 crc kubenswrapper[4957]: E1128 22:12:46.692701 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66\": container with ID starting with cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66 not found: ID does not exist" containerID="cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.692740 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66"} err="failed to get container status \"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66\": rpc error: code = NotFound desc = could not find container \"cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66\": container with ID starting with cd17349e54f76cbba6235011a63019ad85ec073de536cb40552e96d082705f66 not found: ID does not exist" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.692766 4957 scope.go:117] "RemoveContainer" containerID="faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620" Nov 28 22:12:46 crc kubenswrapper[4957]: E1128 22:12:46.693165 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620\": container with ID starting with faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620 not found: ID does not exist" containerID="faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.693485 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620"} err="failed to get container status \"faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620\": rpc error: code = NotFound desc = could not find container \"faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620\": container with ID starting with faf90bebdb21db48ef2e09e1b07f5c899f41045863d714edb6abceb979352620 not found: ID does not exist" Nov 28 22:12:46 crc kubenswrapper[4957]: I1128 22:12:46.826038 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a244a33e-4114-43ee-9854-5db407d23e85" path="/var/lib/kubelet/pods/a244a33e-4114-43ee-9854-5db407d23e85/volumes" Nov 28 22:12:48 crc kubenswrapper[4957]: I1128 22:12:48.825202 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:48 crc kubenswrapper[4957]: I1128 22:12:48.884188 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:49 crc kubenswrapper[4957]: I1128 22:12:49.222481 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:50 crc kubenswrapper[4957]: I1128 22:12:50.618375 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m4k96" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="registry-server" containerID="cri-o://cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990" gracePeriod=2 Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.310556 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.369822 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfps\" (UniqueName: \"kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps\") pod \"2395d2d9-eb1c-4aec-9a76-0672bed77191\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.370007 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content\") pod \"2395d2d9-eb1c-4aec-9a76-0672bed77191\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.370183 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities\") pod \"2395d2d9-eb1c-4aec-9a76-0672bed77191\" (UID: \"2395d2d9-eb1c-4aec-9a76-0672bed77191\") " Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.371474 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities" (OuterVolumeSpecName: "utilities") pod "2395d2d9-eb1c-4aec-9a76-0672bed77191" (UID: "2395d2d9-eb1c-4aec-9a76-0672bed77191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.385313 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps" (OuterVolumeSpecName: "kube-api-access-xhfps") pod "2395d2d9-eb1c-4aec-9a76-0672bed77191" (UID: "2395d2d9-eb1c-4aec-9a76-0672bed77191"). InnerVolumeSpecName "kube-api-access-xhfps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.424700 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2395d2d9-eb1c-4aec-9a76-0672bed77191" (UID: "2395d2d9-eb1c-4aec-9a76-0672bed77191"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.473157 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.473193 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfps\" (UniqueName: \"kubernetes.io/projected/2395d2d9-eb1c-4aec-9a76-0672bed77191-kube-api-access-xhfps\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.473204 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2395d2d9-eb1c-4aec-9a76-0672bed77191-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.631498 4957 generic.go:334] "Generic (PLEG): container finished" podID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerID="cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990" exitCode=0 Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.631541 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerDied","Data":"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990"} Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.631571 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4k96" event={"ID":"2395d2d9-eb1c-4aec-9a76-0672bed77191","Type":"ContainerDied","Data":"36c17dfe078d5d3d4e538dd5057f98939381b7994c7e1cd47acc84dda49f4e0f"} Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.631589 4957 scope.go:117] "RemoveContainer" containerID="cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.631707 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m4k96" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.665240 4957 scope.go:117] "RemoveContainer" containerID="2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.669074 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.680345 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m4k96"] Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.700845 4957 scope.go:117] "RemoveContainer" containerID="8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.745490 4957 scope.go:117] "RemoveContainer" containerID="cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990" Nov 28 22:12:51 crc kubenswrapper[4957]: E1128 22:12:51.745961 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990\": container with ID starting with cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990 not found: ID does not exist" containerID="cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.746086 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990"} err="failed to get container status \"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990\": rpc error: code = NotFound desc = could not find container \"cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990\": container with ID starting with cd5dc0083fbf63fea690ad3251025815652e293114a9fb4323a085a80fc51990 not found: ID does not exist" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.746176 4957 scope.go:117] "RemoveContainer" containerID="2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996" Nov 28 22:12:51 crc kubenswrapper[4957]: E1128 22:12:51.746580 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996\": container with ID starting with 2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996 not found: ID does not exist" containerID="2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.746633 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996"} err="failed to get container status \"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996\": rpc error: code = NotFound desc = could not find container \"2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996\": container with ID starting with 2f7071b5063de79b9d15afed05c753abe3ec96faf16e6011d39f26ce6cd90996 not found: ID does not exist" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.746658 4957 scope.go:117] "RemoveContainer" containerID="8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a" Nov 28 22:12:51 crc kubenswrapper[4957]: E1128 22:12:51.746943 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a\": container with ID starting with 8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a not found: ID does not exist" containerID="8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a" Nov 28 22:12:51 crc kubenswrapper[4957]: I1128 22:12:51.746967 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a"} err="failed to get container status \"8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a\": rpc error: code = NotFound desc = could not find container \"8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a\": container with ID starting with 8cd3e552fb7529a50e6462697c558a0a0a408ed26029b1c13442b3531199b80a not found: ID does not exist" Nov 28 22:12:52 crc kubenswrapper[4957]: I1128 22:12:52.825712 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" path="/var/lib/kubelet/pods/2395d2d9-eb1c-4aec-9a76-0672bed77191/volumes" Nov 28 22:13:08 crc kubenswrapper[4957]: I1128 22:13:08.992278 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:13:08 crc kubenswrapper[4957]: I1128 22:13:08.992715 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:13:38 crc kubenswrapper[4957]: I1128 22:13:38.992425 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:13:38 crc kubenswrapper[4957]: I1128 22:13:38.992940 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:14:08 crc kubenswrapper[4957]: I1128 22:14:08.992391 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:14:08 crc kubenswrapper[4957]: I1128 22:14:08.993108 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:14:08 crc kubenswrapper[4957]: I1128 22:14:08.993150 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:14:08 crc kubenswrapper[4957]: I1128 22:14:08.994098 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:14:08 crc kubenswrapper[4957]: I1128 22:14:08.994164 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881" gracePeriod=600 Nov 28 22:14:09 crc kubenswrapper[4957]: I1128 22:14:09.444985 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881" exitCode=0 Nov 28 22:14:09 crc kubenswrapper[4957]: I1128 22:14:09.445054 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881"} Nov 28 22:14:09 crc kubenswrapper[4957]: I1128 22:14:09.445341 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf"} Nov 28 22:14:09 crc kubenswrapper[4957]: I1128 22:14:09.445362 4957 scope.go:117] "RemoveContainer" containerID="a4ec5f6fb2e6f83418657c147c2a2182c002506417b54392c61d7974ea1698a2" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.651144 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652512 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="extract-content" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652545 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="extract-content" Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652578 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="extract-utilities" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652587 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="extract-utilities" Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652595 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652603 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652646 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652654 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652672 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="extract-utilities" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652679 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="extract-utilities" Nov 28 22:14:59 crc kubenswrapper[4957]: E1128 22:14:59.652701 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="extract-content" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652709 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="extract-content" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652940 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="a244a33e-4114-43ee-9854-5db407d23e85" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.652963 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="2395d2d9-eb1c-4aec-9a76-0672bed77191" containerName="registry-server" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.655002 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.665512 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.778591 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.778745 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.778822 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb67s\" (UniqueName: \"kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.880995 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.881138 4957 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.881239 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb67s\" (UniqueName: \"kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.881647 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.882013 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.902719 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb67s\" (UniqueName: \"kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s\") pod \"redhat-operators-4dwrc\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:14:59 crc kubenswrapper[4957]: I1128 22:14:59.983951 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.164808 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx"] Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.167346 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.174145 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.176158 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.283042 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx"] Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.291925 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.291991 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnl4k\" (UniqueName: \"kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.292112 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.394586 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.394781 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.394833 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnl4k\" (UniqueName: \"kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.395713 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume\") pod 
\"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.404155 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.421358 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnl4k\" (UniqueName: \"kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k\") pod \"collect-profiles-29406135-mwrtx\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.504057 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.551428 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.994932 4957 generic.go:334] "Generic (PLEG): container finished" podID="8a1086a7-7268-4def-b218-db0163d1352b" containerID="e772be0549c40a962e53e3a451307fcffce7112c4f6d00c402d991497a9938d4" exitCode=0 Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.995350 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerDied","Data":"e772be0549c40a962e53e3a451307fcffce7112c4f6d00c402d991497a9938d4"} Nov 28 22:15:00 crc kubenswrapper[4957]: I1128 22:15:00.996903 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerStarted","Data":"4dd127f2babff9c78aef07ca13ba1b596eb2411bdfe6f5bd255709ffc3ebfe0e"} Nov 28 22:15:01 crc kubenswrapper[4957]: I1128 22:15:01.040873 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx"] Nov 28 22:15:01 crc kubenswrapper[4957]: W1128 22:15:01.041842 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b06774_3c21_4a96_9c1f_ed927d4ccfa6.slice/crio-fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c WatchSource:0}: Error finding container fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c: Status 404 returned error can't find the container with id fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c Nov 28 22:15:02 crc kubenswrapper[4957]: I1128 22:15:02.013478 4957 generic.go:334] "Generic (PLEG): container finished" podID="50b06774-3c21-4a96-9c1f-ed927d4ccfa6" containerID="71024850a7e6eaf4f9e5beb1360a248f5c6c5ab0b766c18907fdb6b0ad5b6364" exitCode=0 Nov 28 22:15:02 crc kubenswrapper[4957]: I1128 22:15:02.014252 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" 
event={"ID":"50b06774-3c21-4a96-9c1f-ed927d4ccfa6","Type":"ContainerDied","Data":"71024850a7e6eaf4f9e5beb1360a248f5c6c5ab0b766c18907fdb6b0ad5b6364"} Nov 28 22:15:02 crc kubenswrapper[4957]: I1128 22:15:02.014285 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" event={"ID":"50b06774-3c21-4a96-9c1f-ed927d4ccfa6","Type":"ContainerStarted","Data":"fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c"} Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.036620 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerStarted","Data":"286bbfe33e80496ee484ceb9ddee2ae65fea3c44c8191f4f8e8ac1ef639b3ca8"} Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.540435 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.594552 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume\") pod \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.594705 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnl4k\" (UniqueName: \"kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k\") pod \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.594824 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume\") pod \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\" (UID: \"50b06774-3c21-4a96-9c1f-ed927d4ccfa6\") " Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.595806 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume" (OuterVolumeSpecName: "config-volume") pod "50b06774-3c21-4a96-9c1f-ed927d4ccfa6" (UID: "50b06774-3c21-4a96-9c1f-ed927d4ccfa6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.596009 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.607649 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k" (OuterVolumeSpecName: "kube-api-access-nnl4k") pod "50b06774-3c21-4a96-9c1f-ed927d4ccfa6" (UID: "50b06774-3c21-4a96-9c1f-ed927d4ccfa6"). InnerVolumeSpecName "kube-api-access-nnl4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.625736 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50b06774-3c21-4a96-9c1f-ed927d4ccfa6" (UID: "50b06774-3c21-4a96-9c1f-ed927d4ccfa6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.697704 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:03 crc kubenswrapper[4957]: I1128 22:15:03.697739 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnl4k\" (UniqueName: \"kubernetes.io/projected/50b06774-3c21-4a96-9c1f-ed927d4ccfa6-kube-api-access-nnl4k\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:04 crc kubenswrapper[4957]: I1128 22:15:04.068245 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" Nov 28 22:15:04 crc kubenswrapper[4957]: I1128 22:15:04.071560 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406135-mwrtx" event={"ID":"50b06774-3c21-4a96-9c1f-ed927d4ccfa6","Type":"ContainerDied","Data":"fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c"} Nov 28 22:15:04 crc kubenswrapper[4957]: I1128 22:15:04.071630 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4af165c0c233edfc2be33a9c4b30163ab0182d4ead349da1b7e638f796342c" Nov 28 22:15:04 crc kubenswrapper[4957]: I1128 22:15:04.839780 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"] Nov 28 22:15:04 crc kubenswrapper[4957]: I1128 22:15:04.851501 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406090-2rmnq"] Nov 28 22:15:06 crc kubenswrapper[4957]: I1128 22:15:06.092523 4957 generic.go:334] "Generic (PLEG): container finished" podID="8a1086a7-7268-4def-b218-db0163d1352b" containerID="286bbfe33e80496ee484ceb9ddee2ae65fea3c44c8191f4f8e8ac1ef639b3ca8" exitCode=0 Nov 28 22:15:06 crc kubenswrapper[4957]: I1128 22:15:06.092631 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerDied","Data":"286bbfe33e80496ee484ceb9ddee2ae65fea3c44c8191f4f8e8ac1ef639b3ca8"} Nov 28 22:15:06 crc kubenswrapper[4957]: I1128 22:15:06.826554 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd4f374-1a16-4e94-ab55-5463be9dff02" path="/var/lib/kubelet/pods/cbd4f374-1a16-4e94-ab55-5463be9dff02/volumes" Nov 28 22:15:07 crc kubenswrapper[4957]: I1128 22:15:07.106205 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerStarted","Data":"0e77ff261f7f483b9bc9f1b3b6be8cf3e1ec9d1a6afde09cc0710e8b5cfea8b9"} Nov 28 22:15:07 crc kubenswrapper[4957]: I1128 22:15:07.130717 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dwrc" 
podStartSLOduration=2.633073584 podStartE2EDuration="8.130695432s" podCreationTimestamp="2025-11-28 22:14:59 +0000 UTC" firstStartedPulling="2025-11-28 22:15:00.996929948 +0000 UTC m=+5140.465577857" lastFinishedPulling="2025-11-28 22:15:06.494551796 +0000 UTC m=+5145.963199705" observedRunningTime="2025-11-28 22:15:07.12492938 +0000 UTC m=+5146.593577289" watchObservedRunningTime="2025-11-28 22:15:07.130695432 +0000 UTC m=+5146.599343341" Nov 28 22:15:09 crc kubenswrapper[4957]: I1128 22:15:09.984719 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:09 crc kubenswrapper[4957]: I1128 22:15:09.985549 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:11 crc kubenswrapper[4957]: I1128 22:15:11.032104 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dwrc" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="registry-server" probeResult="failure" output=< Nov 28 22:15:11 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 22:15:11 crc kubenswrapper[4957]: > Nov 28 22:15:20 crc kubenswrapper[4957]: I1128 22:15:20.239034 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:20 crc kubenswrapper[4957]: I1128 22:15:20.304912 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:20 crc kubenswrapper[4957]: I1128 22:15:20.483046 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:15:21 crc kubenswrapper[4957]: I1128 22:15:21.276557 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dwrc" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="registry-server" containerID="cri-o://0e77ff261f7f483b9bc9f1b3b6be8cf3e1ec9d1a6afde09cc0710e8b5cfea8b9" gracePeriod=2 Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.291862 4957 generic.go:334] "Generic (PLEG): container finished" podID="8a1086a7-7268-4def-b218-db0163d1352b" containerID="0e77ff261f7f483b9bc9f1b3b6be8cf3e1ec9d1a6afde09cc0710e8b5cfea8b9" exitCode=0 Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.291928 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerDied","Data":"0e77ff261f7f483b9bc9f1b3b6be8cf3e1ec9d1a6afde09cc0710e8b5cfea8b9"} Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.508270 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.574512 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb67s\" (UniqueName: \"kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s\") pod \"8a1086a7-7268-4def-b218-db0163d1352b\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.575136 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content\") pod \"8a1086a7-7268-4def-b218-db0163d1352b\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.575323 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities\") pod \"8a1086a7-7268-4def-b218-db0163d1352b\" (UID: \"8a1086a7-7268-4def-b218-db0163d1352b\") " Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.576342 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities" (OuterVolumeSpecName: "utilities") pod "8a1086a7-7268-4def-b218-db0163d1352b" (UID: "8a1086a7-7268-4def-b218-db0163d1352b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.585438 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s" (OuterVolumeSpecName: "kube-api-access-xb67s") pod "8a1086a7-7268-4def-b218-db0163d1352b" (UID: "8a1086a7-7268-4def-b218-db0163d1352b"). InnerVolumeSpecName "kube-api-access-xb67s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.678862 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a1086a7-7268-4def-b218-db0163d1352b" (UID: "8a1086a7-7268-4def-b218-db0163d1352b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.683341 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb67s\" (UniqueName: \"kubernetes.io/projected/8a1086a7-7268-4def-b218-db0163d1352b-kube-api-access-xb67s\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.683382 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:22 crc kubenswrapper[4957]: I1128 22:15:22.683392 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1086a7-7268-4def-b218-db0163d1352b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.305141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dwrc" event={"ID":"8a1086a7-7268-4def-b218-db0163d1352b","Type":"ContainerDied","Data":"4dd127f2babff9c78aef07ca13ba1b596eb2411bdfe6f5bd255709ffc3ebfe0e"} Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.306127 4957 scope.go:117] "RemoveContainer" containerID="0e77ff261f7f483b9bc9f1b3b6be8cf3e1ec9d1a6afde09cc0710e8b5cfea8b9" Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.305344 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dwrc" Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.336919 4957 scope.go:117] "RemoveContainer" containerID="286bbfe33e80496ee484ceb9ddee2ae65fea3c44c8191f4f8e8ac1ef639b3ca8" Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.337603 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.349219 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dwrc"] Nov 28 22:15:23 crc kubenswrapper[4957]: I1128 22:15:23.360733 4957 scope.go:117] "RemoveContainer" containerID="e772be0549c40a962e53e3a451307fcffce7112c4f6d00c402d991497a9938d4" Nov 28 22:15:24 crc kubenswrapper[4957]: I1128 22:15:24.826048 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1086a7-7268-4def-b218-db0163d1352b" path="/var/lib/kubelet/pods/8a1086a7-7268-4def-b218-db0163d1352b/volumes" Nov 28 22:15:30 crc kubenswrapper[4957]: I1128 22:15:30.367192 4957 scope.go:117] "RemoveContainer" containerID="73320f6106000424242a5077c8c8446a90172be2fca3eb9849e5fb6eb4c26862" Nov 28 22:16:38 crc kubenswrapper[4957]: I1128 22:16:38.992516 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:16:38 crc kubenswrapper[4957]: I1128 22:16:38.993054 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:17:08 crc kubenswrapper[4957]: I1128 22:17:08.992543 4957 patch_prober.go:28] interesting 
pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:17:08 crc kubenswrapper[4957]: I1128 22:17:08.993791 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:17:38 crc kubenswrapper[4957]: I1128 22:17:38.992136 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:17:38 crc kubenswrapper[4957]: I1128 22:17:38.992669 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:17:38 crc kubenswrapper[4957]: I1128 22:17:38.992711 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:17:38 crc kubenswrapper[4957]: I1128 22:17:38.993563 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:17:38 crc kubenswrapper[4957]: I1128 22:17:38.993618 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" gracePeriod=600 Nov 28 22:17:39 crc kubenswrapper[4957]: E1128 22:17:39.117679 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:17:39 crc kubenswrapper[4957]: I1128 22:17:39.816100 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" exitCode=0 Nov 28 22:17:39 crc kubenswrapper[4957]: I1128 22:17:39.816144 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf"} Nov 28 22:17:39 crc kubenswrapper[4957]: 
I1128 22:17:39.816177 4957 scope.go:117] "RemoveContainer" containerID="9166f2a7d387e97c10b8811d9db82182bfa8d84f64cfa1e86237a00e24d90881" Nov 28 22:17:39 crc kubenswrapper[4957]: I1128 22:17:39.816535 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:17:39 crc kubenswrapper[4957]: E1128 22:17:39.816803 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:17:53 crc kubenswrapper[4957]: I1128 22:17:53.812511 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:17:53 crc kubenswrapper[4957]: E1128 22:17:53.813223 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:18:06 crc kubenswrapper[4957]: I1128 22:18:06.814162 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:18:06 crc kubenswrapper[4957]: E1128 22:18:06.814936 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:18:18 crc kubenswrapper[4957]: I1128 22:18:18.814900 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:18:18 crc kubenswrapper[4957]: E1128 22:18:18.816060 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.668720 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:20 crc kubenswrapper[4957]: E1128 22:18:20.669527 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="extract-content" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669540 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="extract-content" Nov 28 22:18:20 crc kubenswrapper[4957]: E1128 22:18:20.669579 4957 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50b06774-3c21-4a96-9c1f-ed927d4ccfa6" containerName="collect-profiles" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669586 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b06774-3c21-4a96-9c1f-ed927d4ccfa6" containerName="collect-profiles" Nov 28 22:18:20 crc kubenswrapper[4957]: E1128 22:18:20.669596 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="registry-server" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669603 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="registry-server" Nov 28 22:18:20 crc kubenswrapper[4957]: E1128 22:18:20.669624 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="extract-utilities" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669629 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="extract-utilities" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669841 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b06774-3c21-4a96-9c1f-ed927d4ccfa6" containerName="collect-profiles" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.669856 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1086a7-7268-4def-b218-db0163d1352b" containerName="registry-server" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.671765 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.673830 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsld\" (UniqueName: \"kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.673995 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.674035 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.689903 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.776606 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srsld\" (UniqueName: \"kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 
22:18:20.776712 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.776757 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.777162 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:20 crc kubenswrapper[4957]: I1128 22:18:20.777170 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:21 crc kubenswrapper[4957]: I1128 22:18:21.087912 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsld\" (UniqueName: \"kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld\") pod \"community-operators-8b9bp\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:21 crc kubenswrapper[4957]: I1128 22:18:21.304772 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:21 crc kubenswrapper[4957]: I1128 22:18:21.807882 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:22 crc kubenswrapper[4957]: I1128 22:18:22.312190 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerID="68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7" exitCode=0 Nov 28 22:18:22 crc kubenswrapper[4957]: I1128 22:18:22.312243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerDied","Data":"68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7"} Nov 28 22:18:22 crc kubenswrapper[4957]: I1128 22:18:22.312515 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerStarted","Data":"c9baf1b7521dccf41b9d37391a83d82d496c656204cb4df2adf6f7d13edd1f12"} Nov 28 22:18:22 crc kubenswrapper[4957]: I1128 22:18:22.314510 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:18:23 crc kubenswrapper[4957]: I1128 22:18:23.323958 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerStarted","Data":"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7"} Nov 28 22:18:24 crc kubenswrapper[4957]: I1128 22:18:24.336750 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerID="9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7" exitCode=0 Nov 28 22:18:24 crc kubenswrapper[4957]: I1128 22:18:24.336814 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerDied","Data":"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7"} Nov 28 22:18:25 crc kubenswrapper[4957]: I1128 22:18:25.350694 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerStarted","Data":"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7"} Nov 28 22:18:25 crc kubenswrapper[4957]: I1128 22:18:25.367199 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8b9bp" podStartSLOduration=2.791585466 podStartE2EDuration="5.367178465s" podCreationTimestamp="2025-11-28 22:18:20 +0000 UTC" firstStartedPulling="2025-11-28 22:18:22.314238533 +0000 UTC m=+5341.782886442" lastFinishedPulling="2025-11-28 22:18:24.889831532 +0000 UTC m=+5344.358479441" observedRunningTime="2025-11-28 22:18:25.366061828 +0000 UTC m=+5344.834709737" watchObservedRunningTime="2025-11-28 22:18:25.367178465 +0000 UTC m=+5344.835826384" Nov 28 22:18:31 crc kubenswrapper[4957]: I1128 22:18:31.305601 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:31 crc kubenswrapper[4957]: I1128 22:18:31.306151 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:31 crc kubenswrapper[4957]: I1128 22:18:31.356569 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:31 crc kubenswrapper[4957]: I1128 22:18:31.474161 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:31 crc kubenswrapper[4957]: I1128 22:18:31.606224 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:33 crc kubenswrapper[4957]: I1128 22:18:33.428730 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8b9bp" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="registry-server" containerID="cri-o://e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7" gracePeriod=2 Nov 28 22:18:33 crc kubenswrapper[4957]: I1128 22:18:33.812496 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:18:33 crc kubenswrapper[4957]: E1128 22:18:33.812933 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:18:33 crc kubenswrapper[4957]: I1128 22:18:33.969673 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.079905 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities\") pod \"1bf0b530-cb28-44ca-88f3-f7f719f40513\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.080325 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srsld\" (UniqueName: \"kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld\") pod \"1bf0b530-cb28-44ca-88f3-f7f719f40513\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.080607 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content\") pod \"1bf0b530-cb28-44ca-88f3-f7f719f40513\" (UID: \"1bf0b530-cb28-44ca-88f3-f7f719f40513\") " Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.081020 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities" (OuterVolumeSpecName: "utilities") pod "1bf0b530-cb28-44ca-88f3-f7f719f40513" (UID: "1bf0b530-cb28-44ca-88f3-f7f719f40513"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.081631 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.089188 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld" (OuterVolumeSpecName: "kube-api-access-srsld") pod "1bf0b530-cb28-44ca-88f3-f7f719f40513" (UID: "1bf0b530-cb28-44ca-88f3-f7f719f40513"). InnerVolumeSpecName "kube-api-access-srsld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.153530 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bf0b530-cb28-44ca-88f3-f7f719f40513" (UID: "1bf0b530-cb28-44ca-88f3-f7f719f40513"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.183827 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srsld\" (UniqueName: \"kubernetes.io/projected/1bf0b530-cb28-44ca-88f3-f7f719f40513-kube-api-access-srsld\") on node \"crc\" DevicePath \"\"" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.183877 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bf0b530-cb28-44ca-88f3-f7f719f40513-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.442964 4957 generic.go:334] "Generic (PLEG): container finished" podID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerID="e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7" exitCode=0 Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.443018 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerDied","Data":"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7"} Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.443090 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8b9bp" event={"ID":"1bf0b530-cb28-44ca-88f3-f7f719f40513","Type":"ContainerDied","Data":"c9baf1b7521dccf41b9d37391a83d82d496c656204cb4df2adf6f7d13edd1f12"} Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.443120 4957 scope.go:117] "RemoveContainer" containerID="e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.443034 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8b9bp" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.482436 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.484401 4957 scope.go:117] "RemoveContainer" containerID="9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.494682 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8b9bp"] Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.515935 4957 scope.go:117] "RemoveContainer" containerID="68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.579264 4957 scope.go:117] "RemoveContainer" containerID="e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7" Nov 28 22:18:34 crc kubenswrapper[4957]: E1128 22:18:34.580232 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7\": container with ID starting with e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7 not found: ID does not exist" containerID="e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.580271 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7"} err="failed to get container status \"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7\": rpc error: code = NotFound desc = could not find container \"e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7\": container with ID starting with e337ffdb646da63ab303253d820cb8e5f8dc1f788220013eedf44a794535e8b7 not found: ID does not exist" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.580292 4957 scope.go:117] "RemoveContainer" containerID="9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7" Nov 28 22:18:34 crc kubenswrapper[4957]: E1128 22:18:34.580864 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7\": container with ID starting with 9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7 not found: ID does not exist" containerID="9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.580912 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7"} err="failed to get container status \"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7\": rpc error: code = NotFound desc = could not find container \"9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7\": container with ID starting with 9cf0da783db5ccc2301b72f71674c4ad76a5f91f90920f1b264cc3e8d98c85e7 not found: ID does not exist" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.580941 4957 scope.go:117] "RemoveContainer" containerID="68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7" Nov 28 22:18:34 crc kubenswrapper[4957]: E1128 22:18:34.581496 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7\": container with ID starting with 68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7 not found: ID does not exist" containerID="68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.581519 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7"} err="failed to get container status \"68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7\": rpc error: code = NotFound desc = could not find container \"68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7\": container with ID starting with 68c4f27fb1691fa27005346a0374a8c9453e16d03d696e0e2e5b190b1d370ef7 not found: ID does not exist" Nov 28 22:18:34 crc kubenswrapper[4957]: I1128 22:18:34.826370 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" path="/var/lib/kubelet/pods/1bf0b530-cb28-44ca-88f3-f7f719f40513/volumes" Nov 28 22:18:48 crc kubenswrapper[4957]: I1128 22:18:48.813403 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:18:48 crc kubenswrapper[4957]: E1128 22:18:48.814231 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:19:01 crc kubenswrapper[4957]: I1128 22:19:01.812675 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:19:01 crc kubenswrapper[4957]: E1128 22:19:01.813537 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:19:16 crc kubenswrapper[4957]: I1128 22:19:16.812981 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:19:16 crc kubenswrapper[4957]: E1128 22:19:16.814853 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:19:31 crc kubenswrapper[4957]: I1128 22:19:31.813307 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:19:31 crc kubenswrapper[4957]: E1128 22:19:31.814093 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:19:45 crc kubenswrapper[4957]: I1128 22:19:45.813611 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:19:45 crc kubenswrapper[4957]: E1128 22:19:45.814370 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:20:00 crc kubenswrapper[4957]: I1128 22:20:00.820879 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:20:00 crc kubenswrapper[4957]: E1128 22:20:00.821795 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:20:15 crc kubenswrapper[4957]: I1128 22:20:15.813301 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:20:15 crc kubenswrapper[4957]: E1128 22:20:15.814028 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:20:30 crc kubenswrapper[4957]: I1128 22:20:30.828808 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:20:30 crc kubenswrapper[4957]: E1128 22:20:30.830187 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:20:44 crc kubenswrapper[4957]: I1128 22:20:44.813465 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:20:44 crc kubenswrapper[4957]: E1128 22:20:44.814476 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:20:59 crc kubenswrapper[4957]: I1128 22:20:59.813293 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:20:59 crc kubenswrapper[4957]: E1128 22:20:59.814094 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:21:10 crc kubenswrapper[4957]: I1128 22:21:10.821045 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:21:10 crc kubenswrapper[4957]: E1128 22:21:10.822104 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:21:21 crc kubenswrapper[4957]: I1128 22:21:21.814061 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:21:21 crc kubenswrapper[4957]: E1128 22:21:21.814701 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:21:32 crc kubenswrapper[4957]: I1128 22:21:32.813780 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:21:32 crc kubenswrapper[4957]: E1128 22:21:32.814502 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:21:43 crc kubenswrapper[4957]: I1128 22:21:43.813132 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:21:43 crc kubenswrapper[4957]: E1128 22:21:43.813920 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" 
podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:21:58 crc kubenswrapper[4957]: I1128 22:21:58.814191 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:21:58 crc kubenswrapper[4957]: E1128 22:21:58.815137 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:22:13 crc kubenswrapper[4957]: I1128 22:22:13.815348 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:22:13 crc kubenswrapper[4957]: E1128 22:22:13.816320 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:22:27 crc kubenswrapper[4957]: I1128 22:22:27.813128 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:22:27 crc kubenswrapper[4957]: E1128 22:22:27.813915 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:22:39 crc kubenswrapper[4957]: I1128 22:22:39.813278 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:22:40 crc kubenswrapper[4957]: I1128 22:22:40.513243 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74"} Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.922559 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"] Nov 28 22:22:51 crc kubenswrapper[4957]: E1128 22:22:51.925086 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="extract-utilities" Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.925198 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="extract-utilities" Nov 28 22:22:51 crc kubenswrapper[4957]: E1128 22:22:51.925311 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="extract-content" Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.925398 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="extract-content" Nov 28 
Nov 28 22:22:51 crc kubenswrapper[4957]: E1128 22:22:51.925500 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="registry-server"
Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.925559 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="registry-server"
Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.925830 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf0b530-cb28-44ca-88f3-f7f719f40513" containerName="registry-server"
Nov 28 22:22:51 crc kubenswrapper[4957]: I1128 22:22:51.927795 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.017023 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.017100 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s88hp\" (UniqueName: \"kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.017136 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.070904 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"]
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.119587 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.119636 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s88hp\" (UniqueName: \"kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.119660 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.120089 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.120481 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.491375 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s88hp\" (UniqueName: \"kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp\") pod \"redhat-marketplace-wqc4d\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") " pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:52 crc kubenswrapper[4957]: I1128 22:22:52.603201 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:22:53 crc kubenswrapper[4957]: I1128 22:22:53.049468 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"]
Nov 28 22:22:53 crc kubenswrapper[4957]: W1128 22:22:53.050124 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc66531a_807f_448c_8946_4045319b238c.slice/crio-dd42b12527ab788df01f63e0cce9b36f3ec9382cf44349efa150cf640e550af9 WatchSource:0}: Error finding container dd42b12527ab788df01f63e0cce9b36f3ec9382cf44349efa150cf640e550af9: Status 404 returned error can't find the container with id dd42b12527ab788df01f63e0cce9b36f3ec9382cf44349efa150cf640e550af9
Nov 28 22:22:53 crc kubenswrapper[4957]: I1128 22:22:53.631015 4957 generic.go:334] "Generic (PLEG): container finished" podID="fc66531a-807f-448c-8946-4045319b238c" containerID="11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f" exitCode=0
Nov 28 22:22:53 crc kubenswrapper[4957]: I1128 22:22:53.631065 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerDied","Data":"11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f"}
Nov 28 22:22:53 crc kubenswrapper[4957]: I1128 22:22:53.631100 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerStarted","Data":"dd42b12527ab788df01f63e0cce9b36f3ec9382cf44349efa150cf640e550af9"}
Nov 28 22:22:54 crc kubenswrapper[4957]: I1128 22:22:54.644876 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerStarted","Data":"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"}
Nov 28 22:22:55 crc kubenswrapper[4957]: I1128 22:22:55.655869 4957 generic.go:334] "Generic (PLEG): container finished" podID="fc66531a-807f-448c-8946-4045319b238c" containerID="167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d" exitCode=0
Nov 28 22:22:55 crc kubenswrapper[4957]: I1128 22:22:55.655911 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerDied","Data":"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"}
Nov 28 22:22:56 crc kubenswrapper[4957]: I1128 22:22:56.681165 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerStarted","Data":"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"}
Nov 28 22:22:56 crc kubenswrapper[4957]: I1128 22:22:56.711133 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wqc4d" podStartSLOduration=3.209128723 podStartE2EDuration="5.711111424s" podCreationTimestamp="2025-11-28 22:22:51 +0000 UTC" firstStartedPulling="2025-11-28 22:22:53.633330617 +0000 UTC m=+5613.101978526" lastFinishedPulling="2025-11-28 22:22:56.135313318 +0000 UTC m=+5615.603961227" observedRunningTime="2025-11-28 22:22:56.698924743 +0000 UTC m=+5616.167572652" watchObservedRunningTime="2025-11-28 22:22:56.711111424 +0000 UTC m=+5616.179759333"
Nov 28 22:23:02 crc kubenswrapper[4957]: I1128 22:23:02.603382 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:02 crc kubenswrapper[4957]: I1128 22:23:02.604671 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:02 crc kubenswrapper[4957]: I1128 22:23:02.656336 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:02 crc kubenswrapper[4957]: I1128 22:23:02.799990 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:02 crc kubenswrapper[4957]: I1128 22:23:02.897709 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"]
Nov 28 22:23:04 crc kubenswrapper[4957]: I1128 22:23:04.764734 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wqc4d" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="registry-server" containerID="cri-o://c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520" gracePeriod=2
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.273516 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.320957 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities\") pod \"fc66531a-807f-448c-8946-4045319b238c\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") "
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.321026 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s88hp\" (UniqueName: \"kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp\") pod \"fc66531a-807f-448c-8946-4045319b238c\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") "
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.321066 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content\") pod \"fc66531a-807f-448c-8946-4045319b238c\" (UID: \"fc66531a-807f-448c-8946-4045319b238c\") "
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.322169 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities" (OuterVolumeSpecName: "utilities") pod "fc66531a-807f-448c-8946-4045319b238c" (UID: "fc66531a-807f-448c-8946-4045319b238c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.326556 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp" (OuterVolumeSpecName: "kube-api-access-s88hp") pod "fc66531a-807f-448c-8946-4045319b238c" (UID: "fc66531a-807f-448c-8946-4045319b238c"). InnerVolumeSpecName "kube-api-access-s88hp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.338699 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc66531a-807f-448c-8946-4045319b238c" (UID: "fc66531a-807f-448c-8946-4045319b238c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.423295 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.423329 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s88hp\" (UniqueName: \"kubernetes.io/projected/fc66531a-807f-448c-8946-4045319b238c-kube-api-access-s88hp\") on node \"crc\" DevicePath \"\""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.423340 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc66531a-807f-448c-8946-4045319b238c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.776635 4957 generic.go:334] "Generic (PLEG): container finished" podID="fc66531a-807f-448c-8946-4045319b238c" containerID="c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520" exitCode=0
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.776705 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqc4d"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.776727 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerDied","Data":"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"}
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.777301 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqc4d" event={"ID":"fc66531a-807f-448c-8946-4045319b238c","Type":"ContainerDied","Data":"dd42b12527ab788df01f63e0cce9b36f3ec9382cf44349efa150cf640e550af9"}
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.777324 4957 scope.go:117] "RemoveContainer" containerID="c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.807536 4957 scope.go:117] "RemoveContainer" containerID="167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.830599 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"]
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.848168 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqc4d"]
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.848457 4957 scope.go:117] "RemoveContainer" containerID="11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.900993 4957 scope.go:117] "RemoveContainer" containerID="c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"
Nov 28 22:23:05 crc kubenswrapper[4957]: E1128 22:23:05.901537 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520\": container with ID starting with c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520 not found: ID does not exist" containerID="c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.901587 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520"} err="failed to get container status \"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520\": rpc error: code = NotFound desc = could not find container \"c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520\": container with ID starting with c3a52305c2b51c44b41ba29f640744ff4dfccd1ad2549c875989e94cf3860520 not found: ID does not exist"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.901626 4957 scope.go:117] "RemoveContainer" containerID="167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"
Nov 28 22:23:05 crc kubenswrapper[4957]: E1128 22:23:05.901964 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d\": container with ID starting with 167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d not found: ID does not exist" containerID="167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.901994 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d"} err="failed to get container status \"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d\": rpc error: code = NotFound desc = could not find container \"167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d\": container with ID starting with 167237d21fd236fc047ea5a0ad99830b8f215717cd5d3e003f2b603ddf304e4d not found: ID does not exist"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.902017 4957 scope.go:117] "RemoveContainer" containerID="11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f"
Nov 28 22:23:05 crc kubenswrapper[4957]: E1128 22:23:05.902310 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f\": container with ID starting with 11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f not found: ID does not exist" containerID="11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f"
Nov 28 22:23:05 crc kubenswrapper[4957]: I1128 22:23:05.902340 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f"} err="failed to get container status \"11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f\": rpc error: code = NotFound desc = could not find container \"11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f\": container with ID starting with 11a10761b73be7688ac23fe8fd170828d531215ae38c7c9ce35cbfc2a557c14f not found: ID does not exist"
Nov 28 22:23:06 crc kubenswrapper[4957]: I1128 22:23:06.825398 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc66531a-807f-448c-8946-4045319b238c" path="/var/lib/kubelet/pods/fc66531a-807f-448c-8946-4045319b238c/volumes"
Nov 28 22:25:08 crc kubenswrapper[4957]: I1128 22:25:08.992732 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 22:25:08 crc kubenswrapper[4957]: I1128 22:25:08.993455 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 22:25:38 crc kubenswrapper[4957]: I1128 22:25:38.992752 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 22:25:38 crc kubenswrapper[4957]: I1128 22:25:38.993408 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 22:26:01 crc kubenswrapper[4957]: I1128 22:26:01.633708 4957 generic.go:334] "Generic (PLEG): container finished" podID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" containerID="139d81dab751f5bcb2bdcb1152d9b2f84c4a175deb77e611b29c9f87974da598" exitCode=0
Nov 28 22:26:01 crc kubenswrapper[4957]: I1128 22:26:01.633830 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d0047755-5ddc-48c8-a4eb-4bf540cb695f","Type":"ContainerDied","Data":"139d81dab751f5bcb2bdcb1152d9b2f84c4a175deb77e611b29c9f87974da598"}
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.055795 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.246940 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9jj\" (UniqueName: \"kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247004 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247036 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247059 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247193 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247279 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247306 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247367 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.247438 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir\") pod \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\" (UID: \"d0047755-5ddc-48c8-a4eb-4bf540cb695f\") "
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.248050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data" (OuterVolumeSpecName: "config-data") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.249278 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.254382 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj" (OuterVolumeSpecName: "kube-api-access-9q9jj") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "kube-api-access-9q9jj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.254900 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.255118 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.284452 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.284620 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.286201 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.313301 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d0047755-5ddc-48c8-a4eb-4bf540cb695f" (UID: "d0047755-5ddc-48c8-a4eb-4bf540cb695f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.352090 4957 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.354612 4957 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355321 4957 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355440 4957 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355535 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355616 4957 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355698 4957 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0047755-5ddc-48c8-a4eb-4bf540cb695f-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355801 4957 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d0047755-5ddc-48c8-a4eb-4bf540cb695f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.355891 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9jj\" (UniqueName: \"kubernetes.io/projected/d0047755-5ddc-48c8-a4eb-4bf540cb695f-kube-api-access-9q9jj\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.400774 4957 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.457179 4957 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.656403 4957 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"d0047755-5ddc-48c8-a4eb-4bf540cb695f","Type":"ContainerDied","Data":"2c36875f04fb6b924af92d13ec13a6943d04b2db97606c7fe6d3463cfdc5496c"} Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.656649 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c36875f04fb6b924af92d13ec13a6943d04b2db97606c7fe6d3463cfdc5496c" Nov 28 22:26:03 crc kubenswrapper[4957]: I1128 22:26:03.656485 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.850443 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:06 crc kubenswrapper[4957]: E1128 22:26:06.858087 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="extract-content" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858110 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="extract-content" Nov 28 22:26:06 crc kubenswrapper[4957]: E1128 22:26:06.858140 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" containerName="tempest-tests-tempest-tests-runner" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858149 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" containerName="tempest-tests-tempest-tests-runner" Nov 28 22:26:06 crc kubenswrapper[4957]: E1128 22:26:06.858174 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="extract-utilities" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858182 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="extract-utilities" Nov 28 22:26:06 crc kubenswrapper[4957]: E1128 22:26:06.858202 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="registry-server" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858258 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="registry-server" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858663 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0047755-5ddc-48c8-a4eb-4bf540cb695f" containerName="tempest-tests-tempest-tests-runner" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.858688 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc66531a-807f-448c-8946-4045319b238c" containerName="registry-server" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.860871 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:06 crc kubenswrapper[4957]: I1128 22:26:06.866382 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.040825 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.041326 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv26g\" (UniqueName: \"kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.041665 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.144059 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.144196 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv26g\" (UniqueName: \"kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.144322 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.144643 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.144840 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.164045 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sv26g\" (UniqueName: \"kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g\") pod \"redhat-operators-whp5c\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.186470 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.676829 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:07 crc kubenswrapper[4957]: I1128 22:26:07.705888 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerStarted","Data":"27c2eeef567e7723420b68f3493c4f148b51c093fe597125a66ea00e9fa3d739"} Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.720885 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerID="a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc" exitCode=0 Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.720943 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerDied","Data":"a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc"} Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.724343 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.992249 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.992302 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.992341 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.992938 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:26:08 crc kubenswrapper[4957]: I1128 22:26:08.992995 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74" gracePeriod=600 Nov 28 22:26:09 crc kubenswrapper[4957]: I1128 22:26:09.734865 4957 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerStarted","Data":"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d"} Nov 28 22:26:09 crc kubenswrapper[4957]: I1128 22:26:09.738187 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74" exitCode=0 Nov 28 22:26:09 crc kubenswrapper[4957]: I1128 22:26:09.738242 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74"} Nov 28 22:26:09 crc kubenswrapper[4957]: I1128 22:26:09.738276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228"} Nov 28 22:26:09 crc kubenswrapper[4957]: I1128 22:26:09.738293 4957 scope.go:117] "RemoveContainer" containerID="ef0a4857e9623421f42f226c8933e6ba4ac4635ea91ee76ccdc87f2d058c5ebf" Nov 28 22:26:12 crc kubenswrapper[4957]: I1128 22:26:12.778796 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerID="e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d" exitCode=0 Nov 28 22:26:12 crc kubenswrapper[4957]: I1128 22:26:12.778855 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerDied","Data":"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d"} Nov 28 22:26:13 crc kubenswrapper[4957]: I1128 22:26:13.795268 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerStarted","Data":"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab"} Nov 28 22:26:13 crc kubenswrapper[4957]: I1128 22:26:13.827036 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-whp5c" podStartSLOduration=3.369285011 podStartE2EDuration="7.82701425s" podCreationTimestamp="2025-11-28 22:26:06 +0000 UTC" firstStartedPulling="2025-11-28 22:26:08.72402173 +0000 UTC m=+5808.192669649" lastFinishedPulling="2025-11-28 22:26:13.181750969 +0000 UTC m=+5812.650398888" observedRunningTime="2025-11-28 22:26:13.814098931 +0000 UTC m=+5813.282746840" watchObservedRunningTime="2025-11-28 22:26:13.82701425 +0000 UTC m=+5813.295662149" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.182155 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.184278 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.187554 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jg9m9" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.201626 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.334886 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9ln\" (UniqueName: \"kubernetes.io/projected/c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb-kube-api-access-xd9ln\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.335441 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.437583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.437694 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9ln\" (UniqueName: \"kubernetes.io/projected/c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb-kube-api-access-xd9ln\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.438432 4957 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.460079 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9ln\" (UniqueName: \"kubernetes.io/projected/c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb-kube-api-access-xd9ln\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.477294 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc 
kubenswrapper[4957]: I1128 22:26:14.508139 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 22:26:14 crc kubenswrapper[4957]: I1128 22:26:14.940106 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 22:26:14 crc kubenswrapper[4957]: W1128 22:26:14.940820 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc48c0f07_ec0e_4f0e_9c97_b47fe68e5cdb.slice/crio-a672390ddd52a0208f562ad6eacef568410dc414a5df8a707d214e207a9da325 WatchSource:0}: Error finding container a672390ddd52a0208f562ad6eacef568410dc414a5df8a707d214e207a9da325: Status 404 returned error can't find the container with id a672390ddd52a0208f562ad6eacef568410dc414a5df8a707d214e207a9da325 Nov 28 22:26:15 crc kubenswrapper[4957]: I1128 22:26:15.817416 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb","Type":"ContainerStarted","Data":"a672390ddd52a0208f562ad6eacef568410dc414a5df8a707d214e207a9da325"} Nov 28 22:26:16 crc kubenswrapper[4957]: I1128 22:26:16.830190 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb","Type":"ContainerStarted","Data":"fe8d3ac7183ced90751820c2bb2f2297ca41e08f805b50ffe7b9cf1b4306e036"} Nov 28 22:26:16 crc kubenswrapper[4957]: I1128 22:26:16.844174 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.888079429 podStartE2EDuration="2.844155988s" podCreationTimestamp="2025-11-28 22:26:14 +0000 UTC" firstStartedPulling="2025-11-28 22:26:14.944016616 +0000 UTC m=+5814.412664525" lastFinishedPulling="2025-11-28 22:26:15.900093165 +0000 UTC m=+5815.368741084" observedRunningTime="2025-11-28 22:26:16.841531943 +0000 UTC m=+5816.310179852" watchObservedRunningTime="2025-11-28 22:26:16.844155988 +0000 UTC m=+5816.312803907" Nov 28 22:26:17 crc kubenswrapper[4957]: I1128 22:26:17.187128 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:17 crc kubenswrapper[4957]: I1128 22:26:17.187181 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:18 crc kubenswrapper[4957]: I1128 22:26:18.236699 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whp5c" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="registry-server" probeResult="failure" output=< Nov 28 22:26:18 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 22:26:18 crc kubenswrapper[4957]: > Nov 28 22:26:27 crc kubenswrapper[4957]: I1128 22:26:27.233555 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:27 crc kubenswrapper[4957]: I1128 22:26:27.280836 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:27 crc kubenswrapper[4957]: I1128 22:26:27.474611 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:28 crc kubenswrapper[4957]: I1128 22:26:28.981421 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-whp5c" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="registry-server" containerID="cri-o://ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab" gracePeriod=2 Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.470316 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.592431 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv26g\" (UniqueName: \"kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g\") pod \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.592861 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content\") pod \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.592927 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities\") pod \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\" (UID: \"ac5f2019-223e-4145-bca6-1a0e7ad222ef\") " Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.593669 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities" (OuterVolumeSpecName: "utilities") pod "ac5f2019-223e-4145-bca6-1a0e7ad222ef" (UID: "ac5f2019-223e-4145-bca6-1a0e7ad222ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.600302 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g" (OuterVolumeSpecName: "kube-api-access-sv26g") pod "ac5f2019-223e-4145-bca6-1a0e7ad222ef" (UID: "ac5f2019-223e-4145-bca6-1a0e7ad222ef"). InnerVolumeSpecName "kube-api-access-sv26g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.695731 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.696003 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv26g\" (UniqueName: \"kubernetes.io/projected/ac5f2019-223e-4145-bca6-1a0e7ad222ef-kube-api-access-sv26g\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.721024 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac5f2019-223e-4145-bca6-1a0e7ad222ef" (UID: "ac5f2019-223e-4145-bca6-1a0e7ad222ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.799267 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5f2019-223e-4145-bca6-1a0e7ad222ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.994629 4957 generic.go:334] "Generic (PLEG): container finished" podID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerID="ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab" exitCode=0 Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.994791 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerDied","Data":"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab"} Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.995447 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whp5c" event={"ID":"ac5f2019-223e-4145-bca6-1a0e7ad222ef","Type":"ContainerDied","Data":"27c2eeef567e7723420b68f3493c4f148b51c093fe597125a66ea00e9fa3d739"} Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.995482 4957 scope.go:117] "RemoveContainer" containerID="ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab" Nov 28 22:26:29 crc kubenswrapper[4957]: I1128 22:26:29.994883 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whp5c" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.025848 4957 scope.go:117] "RemoveContainer" containerID="e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.041676 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.052893 4957 scope.go:117] "RemoveContainer" containerID="a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.058018 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whp5c"] Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.102135 4957 scope.go:117] "RemoveContainer" containerID="ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab" Nov 28 22:26:30 crc kubenswrapper[4957]: E1128 22:26:30.102645 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab\": container with ID starting with ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab not found: ID does not exist" containerID="ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.102762 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab"} err="failed to get container status \"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab\": rpc error: code = NotFound desc = could not find container \"ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab\": container with ID starting with ec3c8958911bed43fc9b51cf7be42509f25fa2c1efb3aa02df4ca05eb24fafab not found: ID does not exist" Nov 28 22:26:30 crc 
kubenswrapper[4957]: I1128 22:26:30.102878 4957 scope.go:117] "RemoveContainer" containerID="e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d" Nov 28 22:26:30 crc kubenswrapper[4957]: E1128 22:26:30.103407 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d\": container with ID starting with e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d not found: ID does not exist" containerID="e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.103452 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d"} err="failed to get container status \"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d\": rpc error: code = NotFound desc = could not find container \"e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d\": container with ID starting with e286ca27ffa37c3a9fbd6253c3907586b7606c07dc77638fb26682af1259a83d not found: ID does not exist" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.103489 4957 scope.go:117] "RemoveContainer" containerID="a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc" Nov 28 22:26:30 crc kubenswrapper[4957]: E1128 22:26:30.103798 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc\": container with ID starting with a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc not found: ID does not exist" containerID="a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.103835 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc"} err="failed to get container status \"a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc\": rpc error: code = NotFound desc = could not find container \"a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc\": container with ID starting with a01c9075de5df6d9a152fb8429ea2e747c8f16a0b671d7888ffad636a474b8dc not found: ID does not exist" Nov 28 22:26:30 crc kubenswrapper[4957]: I1128 22:26:30.829406 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" path="/var/lib/kubelet/pods/ac5f2019-223e-4145-bca6-1a0e7ad222ef/volumes" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.196796 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-krdfw/must-gather-9j9tl"] Nov 28 22:26:47 crc kubenswrapper[4957]: E1128 22:26:47.198589 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="extract-content" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.198603 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="extract-content" Nov 28 22:26:47 crc kubenswrapper[4957]: E1128 22:26:47.198626 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="registry-server" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.198631 4957 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="registry-server" Nov 28 22:26:47 crc kubenswrapper[4957]: E1128 22:26:47.198661 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="extract-utilities" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.198667 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="extract-utilities" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.198876 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5f2019-223e-4145-bca6-1a0e7ad222ef" containerName="registry-server" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.200016 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.202476 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-krdfw"/"kube-root-ca.crt" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.202932 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-krdfw"/"default-dockercfg-cw99q" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.203011 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-krdfw"/"openshift-service-ca.crt" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.218333 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-krdfw/must-gather-9j9tl"] Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.316288 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffjs\" (UniqueName: \"kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs\") pod \"must-gather-9j9tl\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.316365 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output\") pod \"must-gather-9j9tl\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.418857 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffjs\" (UniqueName: \"kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs\") pod \"must-gather-9j9tl\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.419398 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output\") pod \"must-gather-9j9tl\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.419997 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output\") pod \"must-gather-9j9tl\" 
(UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.450033 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffjs\" (UniqueName: \"kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs\") pod \"must-gather-9j9tl\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:47 crc kubenswrapper[4957]: I1128 22:26:47.521250 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:26:48 crc kubenswrapper[4957]: I1128 22:26:48.003418 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-krdfw/must-gather-9j9tl"] Nov 28 22:26:48 crc kubenswrapper[4957]: I1128 22:26:48.201575 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/must-gather-9j9tl" event={"ID":"62ed8424-d08d-4c25-b34c-aae91b72378d","Type":"ContainerStarted","Data":"2e60d256723a401fa01c2073d889cdf7663cb164b00cee9d78b41e0769802eb0"} Nov 28 22:26:54 crc kubenswrapper[4957]: I1128 22:26:54.267661 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/must-gather-9j9tl" event={"ID":"62ed8424-d08d-4c25-b34c-aae91b72378d","Type":"ContainerStarted","Data":"752697841de6889fd13ca8e4737b02f4f37769b3aa245a7d769aee2ca67d8010"} Nov 28 22:26:54 crc kubenswrapper[4957]: I1128 22:26:54.268309 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/must-gather-9j9tl" event={"ID":"62ed8424-d08d-4c25-b34c-aae91b72378d","Type":"ContainerStarted","Data":"3a08321aebb1a18356caeb49687cf00cc868531f530a994158c0b86360ce6ba9"} Nov 28 22:26:54 crc kubenswrapper[4957]: I1128 22:26:54.285909 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-krdfw/must-gather-9j9tl" podStartSLOduration=1.8403165910000001 podStartE2EDuration="7.285891447s" podCreationTimestamp="2025-11-28 22:26:47 +0000 UTC" firstStartedPulling="2025-11-28 22:26:48.019540824 +0000 UTC m=+5847.488188733" lastFinishedPulling="2025-11-28 22:26:53.46511568 +0000 UTC m=+5852.933763589" observedRunningTime="2025-11-28 22:26:54.280694029 +0000 UTC m=+5853.749341938" watchObservedRunningTime="2025-11-28 22:26:54.285891447 +0000 UTC m=+5853.754539346" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.653955 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-krdfw/crc-debug-ctlr8"] Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.656560 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.754721 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.754776 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmg7v\" (UniqueName: \"kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.856879 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmg7v\" (UniqueName: \"kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.857190 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.857580 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.877725 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmg7v\" (UniqueName: \"kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v\") pod \"crc-debug-ctlr8\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:57 crc kubenswrapper[4957]: I1128 22:26:57.977628 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:26:58 crc kubenswrapper[4957]: W1128 22:26:58.014923 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62ef5159_d94f_4357_83be_45db5048dd14.slice/crio-9132c8cbd5c8db5f29abca5c40a14ea2ef51714f0d64092e4a629ccabf3a4baf WatchSource:0}: Error finding container 9132c8cbd5c8db5f29abca5c40a14ea2ef51714f0d64092e4a629ccabf3a4baf: Status 404 returned error can't find the container with id 9132c8cbd5c8db5f29abca5c40a14ea2ef51714f0d64092e4a629ccabf3a4baf Nov 28 22:26:58 crc kubenswrapper[4957]: I1128 22:26:58.307685 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" event={"ID":"62ef5159-d94f-4357-83be-45db5048dd14","Type":"ContainerStarted","Data":"9132c8cbd5c8db5f29abca5c40a14ea2ef51714f0d64092e4a629ccabf3a4baf"} Nov 28 22:27:10 crc kubenswrapper[4957]: I1128 22:27:10.445809 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" event={"ID":"62ef5159-d94f-4357-83be-45db5048dd14","Type":"ContainerStarted","Data":"9f38608e1f8c2710094d89a50b418fc42d72bfba0684118f295f9879a1143e7c"} Nov 28 22:27:10 crc kubenswrapper[4957]: I1128 22:27:10.466490 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" podStartSLOduration=2.091960281 podStartE2EDuration="13.466471517s" podCreationTimestamp="2025-11-28 22:26:57 +0000 UTC" firstStartedPulling="2025-11-28 22:26:58.019378223 +0000 UTC m=+5857.488026132" lastFinishedPulling="2025-11-28 22:27:09.393889459 +0000 UTC m=+5868.862537368" observedRunningTime="2025-11-28 22:27:10.458603953 +0000 UTC m=+5869.927251862" watchObservedRunningTime="2025-11-28 22:27:10.466471517 +0000 UTC m=+5869.935119416" Nov 28 22:27:53 crc kubenswrapper[4957]: I1128 22:27:53.910159 4957 generic.go:334] "Generic (PLEG): container finished" podID="62ef5159-d94f-4357-83be-45db5048dd14" containerID="9f38608e1f8c2710094d89a50b418fc42d72bfba0684118f295f9879a1143e7c" exitCode=0 Nov 28 22:27:53 crc kubenswrapper[4957]: I1128 22:27:53.910273 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" event={"ID":"62ef5159-d94f-4357-83be-45db5048dd14","Type":"ContainerDied","Data":"9f38608e1f8c2710094d89a50b418fc42d72bfba0684118f295f9879a1143e7c"} Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.057836 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.093252 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-ctlr8"] Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.102329 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-ctlr8"] Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.183403 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host\") pod \"62ef5159-d94f-4357-83be-45db5048dd14\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.183607 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmg7v\" (UniqueName: \"kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v\") pod \"62ef5159-d94f-4357-83be-45db5048dd14\" (UID: \"62ef5159-d94f-4357-83be-45db5048dd14\") " Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.184004 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host" (OuterVolumeSpecName: "host") pod "62ef5159-d94f-4357-83be-45db5048dd14" (UID: "62ef5159-d94f-4357-83be-45db5048dd14"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.184342 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ef5159-d94f-4357-83be-45db5048dd14-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.190164 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v" (OuterVolumeSpecName: "kube-api-access-pmg7v") pod "62ef5159-d94f-4357-83be-45db5048dd14" (UID: "62ef5159-d94f-4357-83be-45db5048dd14"). InnerVolumeSpecName "kube-api-access-pmg7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.286311 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmg7v\" (UniqueName: \"kubernetes.io/projected/62ef5159-d94f-4357-83be-45db5048dd14-kube-api-access-pmg7v\") on node \"crc\" DevicePath \"\"" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.946930 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9132c8cbd5c8db5f29abca5c40a14ea2ef51714f0d64092e4a629ccabf3a4baf" Nov 28 22:27:55 crc kubenswrapper[4957]: I1128 22:27:55.946963 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-ctlr8" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.285623 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-krdfw/crc-debug-vhckk"] Nov 28 22:27:56 crc kubenswrapper[4957]: E1128 22:27:56.286474 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ef5159-d94f-4357-83be-45db5048dd14" containerName="container-00" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.286497 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ef5159-d94f-4357-83be-45db5048dd14" containerName="container-00" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.287019 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ef5159-d94f-4357-83be-45db5048dd14" containerName="container-00" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.288345 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.411103 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.411442 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsw2f\" (UniqueName: \"kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.514097 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.514243 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsw2f\" (UniqueName: \"kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.514489 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.539191 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsw2f\" (UniqueName: \"kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f\") pod \"crc-debug-vhckk\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.613263 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.825403 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ef5159-d94f-4357-83be-45db5048dd14" path="/var/lib/kubelet/pods/62ef5159-d94f-4357-83be-45db5048dd14/volumes" Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.959741 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-vhckk" event={"ID":"9a75fc6b-70eb-4ad0-8cc5-7432296864d9","Type":"ContainerStarted","Data":"f83b80b4699e1fc3a85f68c6adb06a271c897bfd8d08e3e393259a7b06ac17b9"} Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.960392 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-vhckk" event={"ID":"9a75fc6b-70eb-4ad0-8cc5-7432296864d9","Type":"ContainerStarted","Data":"0b31a09e275276c3ca3d12428276c4191075d4fcd77e3d65b9a9fc3d2b677fc9"} Nov 28 22:27:56 crc kubenswrapper[4957]: I1128 22:27:56.986724 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-krdfw/crc-debug-vhckk" podStartSLOduration=0.986679097 podStartE2EDuration="986.679097ms" podCreationTimestamp="2025-11-28 22:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 22:27:56.978631249 +0000 UTC m=+5916.447279158" watchObservedRunningTime="2025-11-28 22:27:56.986679097 +0000 UTC m=+5916.455326996" Nov 28 22:27:57 crc kubenswrapper[4957]: I1128 22:27:57.972462 4957 generic.go:334] "Generic (PLEG): container finished" podID="9a75fc6b-70eb-4ad0-8cc5-7432296864d9" containerID="f83b80b4699e1fc3a85f68c6adb06a271c897bfd8d08e3e393259a7b06ac17b9" exitCode=0 Nov 28 22:27:57 crc kubenswrapper[4957]: I1128 22:27:57.972488 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-vhckk" event={"ID":"9a75fc6b-70eb-4ad0-8cc5-7432296864d9","Type":"ContainerDied","Data":"f83b80b4699e1fc3a85f68c6adb06a271c897bfd8d08e3e393259a7b06ac17b9"} Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.105884 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.268261 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsw2f\" (UniqueName: \"kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f\") pod \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.268355 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host\") pod \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\" (UID: \"9a75fc6b-70eb-4ad0-8cc5-7432296864d9\") " Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.268666 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host" (OuterVolumeSpecName: "host") pod "9a75fc6b-70eb-4ad0-8cc5-7432296864d9" (UID: "9a75fc6b-70eb-4ad0-8cc5-7432296864d9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.270450 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.281667 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f" (OuterVolumeSpecName: "kube-api-access-jsw2f") pod "9a75fc6b-70eb-4ad0-8cc5-7432296864d9" (UID: "9a75fc6b-70eb-4ad0-8cc5-7432296864d9"). InnerVolumeSpecName "kube-api-access-jsw2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.373274 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsw2f\" (UniqueName: \"kubernetes.io/projected/9a75fc6b-70eb-4ad0-8cc5-7432296864d9-kube-api-access-jsw2f\") on node \"crc\" DevicePath \"\"" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.738635 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-vhckk"] Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.748647 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-vhckk"] Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.994954 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b31a09e275276c3ca3d12428276c4191075d4fcd77e3d65b9a9fc3d2b677fc9" Nov 28 22:27:59 crc kubenswrapper[4957]: I1128 22:27:59.995170 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-vhckk" Nov 28 22:28:00 crc kubenswrapper[4957]: I1128 22:28:00.838141 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a75fc6b-70eb-4ad0-8cc5-7432296864d9" path="/var/lib/kubelet/pods/9a75fc6b-70eb-4ad0-8cc5-7432296864d9/volumes" Nov 28 22:28:00 crc kubenswrapper[4957]: I1128 22:28:00.914357 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-krdfw/crc-debug-wtq6b"] Nov 28 22:28:00 crc kubenswrapper[4957]: E1128 22:28:00.914960 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a75fc6b-70eb-4ad0-8cc5-7432296864d9" containerName="container-00" Nov 28 22:28:00 crc kubenswrapper[4957]: I1128 22:28:00.914988 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a75fc6b-70eb-4ad0-8cc5-7432296864d9" containerName="container-00" Nov 28 22:28:00 crc kubenswrapper[4957]: I1128 22:28:00.915761 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a75fc6b-70eb-4ad0-8cc5-7432296864d9" containerName="container-00" Nov 28 22:28:00 crc kubenswrapper[4957]: I1128 22:28:00.917042 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.010451 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.010791 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rc6\" (UniqueName: \"kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.114086 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.114261 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.114287 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rc6\" (UniqueName: \"kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.135553 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rc6\" (UniqueName: \"kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6\") pod \"crc-debug-wtq6b\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: I1128 22:28:01.243814 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:01 crc kubenswrapper[4957]: W1128 22:28:01.282262 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77005c62_b51f_48be_8ef3_1a4b488b712d.slice/crio-da2c2639a209724c6895084de4cd927593c3d442f3b28af5949698102b7a3f16 WatchSource:0}: Error finding container da2c2639a209724c6895084de4cd927593c3d442f3b28af5949698102b7a3f16: Status 404 returned error can't find the container with id da2c2639a209724c6895084de4cd927593c3d442f3b28af5949698102b7a3f16 Nov 28 22:28:02 crc kubenswrapper[4957]: I1128 22:28:02.013243 4957 generic.go:334] "Generic (PLEG): container finished" podID="77005c62-b51f-48be-8ef3-1a4b488b712d" containerID="f3d207fd2a0714e5f74dbfb7621e02826e6efb394438b761ea4b45915f6781ce" exitCode=0 Nov 28 22:28:02 crc kubenswrapper[4957]: I1128 22:28:02.013490 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" event={"ID":"77005c62-b51f-48be-8ef3-1a4b488b712d","Type":"ContainerDied","Data":"f3d207fd2a0714e5f74dbfb7621e02826e6efb394438b761ea4b45915f6781ce"} Nov 28 22:28:02 crc kubenswrapper[4957]: I1128 22:28:02.013549 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" event={"ID":"77005c62-b51f-48be-8ef3-1a4b488b712d","Type":"ContainerStarted","Data":"da2c2639a209724c6895084de4cd927593c3d442f3b28af5949698102b7a3f16"} Nov 28 22:28:02 crc kubenswrapper[4957]: I1128 22:28:02.052670 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-wtq6b"] Nov 28 22:28:02 crc kubenswrapper[4957]: I1128 22:28:02.062928 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-krdfw/crc-debug-wtq6b"] Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.154927 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.265952 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5rc6\" (UniqueName: \"kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6\") pod \"77005c62-b51f-48be-8ef3-1a4b488b712d\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.266221 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host\") pod \"77005c62-b51f-48be-8ef3-1a4b488b712d\" (UID: \"77005c62-b51f-48be-8ef3-1a4b488b712d\") " Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.266949 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host" (OuterVolumeSpecName: "host") pod "77005c62-b51f-48be-8ef3-1a4b488b712d" (UID: "77005c62-b51f-48be-8ef3-1a4b488b712d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.271711 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6" (OuterVolumeSpecName: "kube-api-access-s5rc6") pod "77005c62-b51f-48be-8ef3-1a4b488b712d" (UID: "77005c62-b51f-48be-8ef3-1a4b488b712d"). 
InnerVolumeSpecName "kube-api-access-s5rc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.302198 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:03 crc kubenswrapper[4957]: E1128 22:28:03.302771 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77005c62-b51f-48be-8ef3-1a4b488b712d" containerName="container-00" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.302790 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="77005c62-b51f-48be-8ef3-1a4b488b712d" containerName="container-00" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.303053 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="77005c62-b51f-48be-8ef3-1a4b488b712d" containerName="container-00" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.305026 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.313758 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.369268 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77005c62-b51f-48be-8ef3-1a4b488b712d-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.369549 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5rc6\" (UniqueName: \"kubernetes.io/projected/77005c62-b51f-48be-8ef3-1a4b488b712d-kube-api-access-s5rc6\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.471517 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.471662 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p999l\" (UniqueName: \"kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.471746 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.573479 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.573595 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p999l\" (UniqueName: \"kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.573765 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.574077 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.574121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.592719 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p999l\" (UniqueName: \"kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l\") pod \"certified-operators-hfcnj\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:03 crc kubenswrapper[4957]: I1128 22:28:03.650738 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:04 crc kubenswrapper[4957]: I1128 22:28:04.059961 4957 scope.go:117] "RemoveContainer" containerID="f3d207fd2a0714e5f74dbfb7621e02826e6efb394438b761ea4b45915f6781ce" Nov 28 22:28:04 crc kubenswrapper[4957]: I1128 22:28:04.060792 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-krdfw/crc-debug-wtq6b" Nov 28 22:28:04 crc kubenswrapper[4957]: I1128 22:28:04.376092 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:04 crc kubenswrapper[4957]: I1128 22:28:04.825717 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77005c62-b51f-48be-8ef3-1a4b488b712d" path="/var/lib/kubelet/pods/77005c62-b51f-48be-8ef3-1a4b488b712d/volumes" Nov 28 22:28:05 crc kubenswrapper[4957]: I1128 22:28:05.072753 4957 generic.go:334] "Generic (PLEG): container finished" podID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerID="e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53" exitCode=0 Nov 28 22:28:05 crc kubenswrapper[4957]: I1128 22:28:05.072808 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerDied","Data":"e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53"} Nov 28 22:28:05 crc kubenswrapper[4957]: I1128 22:28:05.073107 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerStarted","Data":"3d88e9f79bd48e6d9b8f3fd7fd697fbde0cdc390b1b1a9ec1e24d8b11f85d144"} Nov 28 22:28:07 crc kubenswrapper[4957]: I1128 22:28:07.096554 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerStarted","Data":"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7"} Nov 28 22:28:08 crc kubenswrapper[4957]: I1128 22:28:08.110572 4957 generic.go:334] "Generic (PLEG): container finished" podID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerID="4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7" exitCode=0 Nov 28 22:28:08 crc kubenswrapper[4957]: I1128 22:28:08.110663 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerDied","Data":"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7"} Nov 28 22:28:09 crc kubenswrapper[4957]: I1128 22:28:09.142457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerStarted","Data":"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0"} Nov 28 22:28:09 crc kubenswrapper[4957]: I1128 22:28:09.178429 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hfcnj" podStartSLOduration=2.652475837 podStartE2EDuration="6.178408905s" podCreationTimestamp="2025-11-28 22:28:03 +0000 UTC" firstStartedPulling="2025-11-28 22:28:05.075247106 +0000 UTC m=+5924.543895035" lastFinishedPulling="2025-11-28 22:28:08.601180194 +0000 UTC m=+5928.069828103" observedRunningTime="2025-11-28 22:28:09.162561013 +0000 UTC m=+5928.631208932" watchObservedRunningTime="2025-11-28 22:28:09.178408905 +0000 UTC m=+5928.647056814" Nov 28 22:28:13 crc kubenswrapper[4957]: I1128 22:28:13.651284 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:13 crc kubenswrapper[4957]: I1128 22:28:13.651819 4957 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:13 crc kubenswrapper[4957]: I1128 22:28:13.701057 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:14 crc kubenswrapper[4957]: I1128 22:28:14.242171 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:14 crc kubenswrapper[4957]: I1128 22:28:14.295158 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.214408 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hfcnj" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="registry-server" containerID="cri-o://94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0" gracePeriod=2 Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.777746 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.926496 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p999l\" (UniqueName: \"kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l\") pod \"f7875799-8435-4b77-a6bb-1a5a430d1b95\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.926909 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities\") pod \"f7875799-8435-4b77-a6bb-1a5a430d1b95\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.927066 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content\") pod \"f7875799-8435-4b77-a6bb-1a5a430d1b95\" (UID: \"f7875799-8435-4b77-a6bb-1a5a430d1b95\") " Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.927527 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities" (OuterVolumeSpecName: "utilities") pod "f7875799-8435-4b77-a6bb-1a5a430d1b95" (UID: "f7875799-8435-4b77-a6bb-1a5a430d1b95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.928122 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.934466 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l" (OuterVolumeSpecName: "kube-api-access-p999l") pod "f7875799-8435-4b77-a6bb-1a5a430d1b95" (UID: "f7875799-8435-4b77-a6bb-1a5a430d1b95"). InnerVolumeSpecName "kube-api-access-p999l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:28:16 crc kubenswrapper[4957]: I1128 22:28:16.988488 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7875799-8435-4b77-a6bb-1a5a430d1b95" (UID: "f7875799-8435-4b77-a6bb-1a5a430d1b95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.030572 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7875799-8435-4b77-a6bb-1a5a430d1b95-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.030609 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p999l\" (UniqueName: \"kubernetes.io/projected/f7875799-8435-4b77-a6bb-1a5a430d1b95-kube-api-access-p999l\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.225882 4957 generic.go:334] "Generic (PLEG): container finished" podID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerID="94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0" exitCode=0 Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.225961 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcnj" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.225981 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerDied","Data":"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0"} Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.226613 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcnj" event={"ID":"f7875799-8435-4b77-a6bb-1a5a430d1b95","Type":"ContainerDied","Data":"3d88e9f79bd48e6d9b8f3fd7fd697fbde0cdc390b1b1a9ec1e24d8b11f85d144"} Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.226643 4957 scope.go:117] "RemoveContainer" containerID="94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.249610 4957 scope.go:117] "RemoveContainer" containerID="4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.281072 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.292550 4957 scope.go:117] "RemoveContainer" containerID="e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.292645 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hfcnj"] Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.324490 4957 scope.go:117] "RemoveContainer" containerID="94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0" Nov 28 22:28:17 crc kubenswrapper[4957]: E1128 22:28:17.325335 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0\": container with ID starting with 
94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0 not found: ID does not exist" containerID="94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.325371 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0"} err="failed to get container status \"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0\": rpc error: code = NotFound desc = could not find container \"94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0\": container with ID starting with 94a7e581e8e6dc5f5cad94d23ab9ed90399296ba6f4924f1d974b9719d3b78d0 not found: ID does not exist" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.325412 4957 scope.go:117] "RemoveContainer" containerID="4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7" Nov 28 22:28:17 crc kubenswrapper[4957]: E1128 22:28:17.325762 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7\": container with ID starting with 4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7 not found: ID does not exist" containerID="4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.325788 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7"} err="failed to get container status \"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7\": rpc error: code = NotFound desc = could not find container \"4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7\": container with ID starting with 4d3959c4c49902f8845de58186faa003489b923a1b0e8676bd68e48136a98ee7 not found: ID does not exist" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.325835 4957 scope.go:117] "RemoveContainer" containerID="e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53" Nov 28 22:28:17 crc kubenswrapper[4957]: E1128 22:28:17.326030 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53\": container with ID starting with e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53 not found: ID does not exist" containerID="e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53" Nov 28 22:28:17 crc kubenswrapper[4957]: I1128 22:28:17.326081 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53"} err="failed to get container status \"e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53\": rpc error: code = NotFound desc = could not find container \"e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53\": container with ID starting with e51db2b5ba8303a66ea47c2d7112731e2fba036b1a1206e0a0d19094b0832d53 not found: ID does not exist" Nov 28 22:28:18 crc kubenswrapper[4957]: I1128 22:28:18.834526 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" path="/var/lib/kubelet/pods/f7875799-8435-4b77-a6bb-1a5a430d1b95/volumes" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.484369 
4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:23 crc kubenswrapper[4957]: E1128 22:28:23.486546 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="registry-server" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.486568 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="registry-server" Nov 28 22:28:23 crc kubenswrapper[4957]: E1128 22:28:23.486591 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="extract-utilities" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.486600 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="extract-utilities" Nov 28 22:28:23 crc kubenswrapper[4957]: E1128 22:28:23.486651 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="extract-content" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.486660 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="extract-content" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.487011 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7875799-8435-4b77-a6bb-1a5a430d1b95" containerName="registry-server" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.489382 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.502446 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.683184 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qn7n\" (UniqueName: \"kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.683443 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.683539 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.785949 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qn7n\" (UniqueName: \"kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc 
kubenswrapper[4957]: I1128 22:28:23.787373 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.787816 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.787891 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.788264 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.810119 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qn7n\" (UniqueName: \"kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n\") pod \"community-operators-2nmxn\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:23 crc kubenswrapper[4957]: I1128 22:28:23.820295 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:24 crc kubenswrapper[4957]: I1128 22:28:24.499072 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:25 crc kubenswrapper[4957]: I1128 22:28:25.317088 4957 generic.go:334] "Generic (PLEG): container finished" podID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerID="0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642" exitCode=0 Nov 28 22:28:25 crc kubenswrapper[4957]: I1128 22:28:25.317173 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerDied","Data":"0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642"} Nov 28 22:28:25 crc kubenswrapper[4957]: I1128 22:28:25.317441 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerStarted","Data":"8da61d853bd853da288db85c22ab24576fcb48eb73fd7f0c193d4233c40c8377"} Nov 28 22:28:26 crc kubenswrapper[4957]: I1128 22:28:26.954961 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-api/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.144375 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-notifier/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.149524 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-evaluator/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.167995 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-listener/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.340997 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerStarted","Data":"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8"} Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.364491 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68cfc84946-tg8qn_68ad8aac-7884-41b1-8f60-68a111f04c11/barbican-api/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.412105 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68cfc84946-tg8qn_68ad8aac-7884-41b1-8f60-68a111f04c11/barbican-api-log/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.488283 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d9bff56f6-ghr8m_cba969ad-04d0-4a30-946d-995723ab4041/barbican-keystone-listener/0.log" Nov 28 22:28:27 crc kubenswrapper[4957]: I1128 22:28:27.683360 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d9bff56f6-ghr8m_cba969ad-04d0-4a30-946d-995723ab4041/barbican-keystone-listener-log/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.161240 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768b76c799-g58ls_d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc/barbican-worker/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.274686 4957 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768b76c799-g58ls_d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc/barbican-worker-log/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.384455 4957 generic.go:334] "Generic (PLEG): container finished" podID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerID="7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8" exitCode=0 Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.384535 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerDied","Data":"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8"} Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.590614 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l_9d722b72-77bc-4500-89f5-a13bfa49eba1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.730426 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/ceilometer-central-agent/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.822982 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/ceilometer-notification-agent/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.860664 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/proxy-httpd/0.log" Nov 28 22:28:28 crc kubenswrapper[4957]: I1128 22:28:28.948147 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/sg-core/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.060841 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7b9e0f02-f5be-405c-8318-834ef79136be/cinder-api-log/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.189065 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7b9e0f02-f5be-405c-8318-834ef79136be/cinder-api/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.348708 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b490ddc2-ebb5-4cba-abea-a76c7e7a5172/cinder-scheduler/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.395108 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerStarted","Data":"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869"} Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.419043 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2nmxn" podStartSLOduration=3.50827001 podStartE2EDuration="6.41902473s" podCreationTimestamp="2025-11-28 22:28:23 +0000 UTC" firstStartedPulling="2025-11-28 22:28:25.319971413 +0000 UTC m=+5944.788619332" lastFinishedPulling="2025-11-28 22:28:28.230726143 +0000 UTC m=+5947.699374052" observedRunningTime="2025-11-28 22:28:29.415136304 +0000 UTC m=+5948.883784213" watchObservedRunningTime="2025-11-28 22:28:29.41902473 +0000 UTC m=+5948.887672639" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.452437 4957 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_b490ddc2-ebb5-4cba-abea-a76c7e7a5172/probe/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.699464 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-gbxml_853f5e84-3f80-4dd1-99cb-4fb5006f2bf5/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.754945 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9_34a11afa-d26a-4036-8d4e-6dcd96bc3036/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:29 crc kubenswrapper[4957]: I1128 22:28:29.919841 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/init/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.287908 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/init/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.304896 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/dnsmasq-dns/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.319036 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6_e706336e-29eb-4b07-b29c-bb080c8026be/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.558043 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29dab28a-afdf-4c02-a83a-f43c408b24ee/glance-log/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.608084 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29dab28a-afdf-4c02-a83a-f43c408b24ee/glance-httpd/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.740412 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_34d417c8-e8c3-491b-84e9-0db9f9a10038/glance-log/0.log" Nov 28 22:28:30 crc kubenswrapper[4957]: I1128 22:28:30.765583 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_34d417c8-e8c3-491b-84e9-0db9f9a10038/glance-httpd/0.log" Nov 28 22:28:31 crc kubenswrapper[4957]: I1128 22:28:31.120125 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-8f7b695b5-9dcxn_eecb8bf2-f385-4670-a84c-611a1f373c8f/heat-engine/0.log" Nov 28 22:28:31 crc kubenswrapper[4957]: I1128 22:28:31.312760 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd_644a4348-cc60-4801-a899-27ba6238dcd1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:31 crc kubenswrapper[4957]: I1128 22:28:31.397615 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4xnhm_52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:31 crc kubenswrapper[4957]: I1128 22:28:31.425322 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-58bdf58698-25xts_25301b86-e61f-4a9e-90e2-b1f1e9c045dc/heat-api/0.log" Nov 28 22:28:31 crc 
kubenswrapper[4957]: I1128 22:28:31.577726 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-768b697649-7gz8m_8fc58381-64db-46f6-9e97-93e8e4c45abe/heat-cfnapi/0.log" Nov 28 22:28:31 crc kubenswrapper[4957]: I1128 22:28:31.954269 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406121-b8vw6_010a05b8-93e0-4601-9d1d-f865737b9230/keystone-cron/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.026879 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7859c96b89-s4dx8_1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6/keystone-api/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.075185 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7d861b88-8080-411b-8c34-ae277a73b580/kube-state-metrics/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.170553 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l_a745c0d3-586f-4841-a3e4-08c009c85f9b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.271435 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-js78h_80b23341-10eb-4c68-aba7-e36583140466/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.503691 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_448e0773-f22b-417a-a4b4-3434881c628f/mysqld-exporter/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.770513 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j_c14be992-6888-4d40-a63f-8ba6cbc0c837/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.806023 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d88444bf-br9dz_f83754ad-9910-4042-9995-ca4dec9d9a29/neutron-httpd/0.log" Nov 28 22:28:32 crc kubenswrapper[4957]: I1128 22:28:32.893522 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d88444bf-br9dz_f83754ad-9910-4042-9995-ca4dec9d9a29/neutron-api/0.log" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.419555 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_22f8d9b1-ab89-42ad-8872-320873c45110/nova-cell0-conductor-conductor/0.log" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.671168 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1fd99b25-e3b2-439d-874c-6ae3351f9cea/nova-api-log/0.log" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.745767 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_33612176-0997-4d00-a797-b5997f0d00c3/nova-cell1-conductor-conductor/0.log" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.820802 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.821547 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:33 crc kubenswrapper[4957]: I1128 22:28:33.874551 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.047439 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6bed6daf-51f2-46cc-9512-a24925686b61/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.064321 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rdwmr_d17490c8-1d2b-43d6-aefe-bcbc181d72aa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.096187 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1fd99b25-e3b2-439d-874c-6ae3351f9cea/nova-api-api/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.338658 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_27892901-c588-481e-8b3c-363e2128f7d3/nova-metadata-log/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.517170 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.682902 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0f526d90-5313-4d45-a9d1-760dbf18440d/nova-scheduler-scheduler/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.723547 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/mysql-bootstrap/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.868016 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/mysql-bootstrap/0.log" Nov 28 22:28:34 crc kubenswrapper[4957]: I1128 22:28:34.962732 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/galera/0.log" Nov 28 22:28:35 crc kubenswrapper[4957]: I1128 22:28:35.082576 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/mysql-bootstrap/0.log" Nov 28 22:28:35 crc kubenswrapper[4957]: I1128 22:28:35.531189 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/mysql-bootstrap/0.log" Nov 28 22:28:35 crc kubenswrapper[4957]: I1128 22:28:35.532521 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/galera/0.log" Nov 28 22:28:35 crc kubenswrapper[4957]: I1128 22:28:35.744650 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687/openstackclient/0.log" Nov 28 22:28:35 crc kubenswrapper[4957]: I1128 22:28:35.847430 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dzt8d_6bc9960f-fdff-42fa-8cdd-4ec0d88f359d/ovn-controller/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.041202 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2d7lb_a91a39cf-2bad-48a1-9dc7-2309bc652725/openstack-network-exporter/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.221580 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:36 crc 
kubenswrapper[4957]: I1128 22:28:36.224287 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server-init/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.407203 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server-init/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.436695 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.447987 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovs-vswitchd/0.log" Nov 28 22:28:36 crc kubenswrapper[4957]: I1128 22:28:36.504606 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_27892901-c588-481e-8b3c-363e2128f7d3/nova-metadata-metadata/0.log" Nov 28 22:28:37 crc kubenswrapper[4957]: I1128 22:28:37.527726 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_69f9b12c-31d7-4df2-a4ec-5861c3ad3d76/openstack-network-exporter/0.log" Nov 28 22:28:37 crc kubenswrapper[4957]: I1128 22:28:37.548043 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5k6gg_9938b0a7-21ab-4bb0-b689-6004bce90534/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:37 crc kubenswrapper[4957]: I1128 22:28:37.561113 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2nmxn" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="registry-server" containerID="cri-o://8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869" gracePeriod=2 Nov 28 22:28:37 crc kubenswrapper[4957]: I1128 22:28:37.793647 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_69f9b12c-31d7-4df2-a4ec-5861c3ad3d76/ovn-northd/0.log" Nov 28 22:28:37 crc kubenswrapper[4957]: I1128 22:28:37.948383 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4f513a2d-d752-44ee-b02c-e7f3dcb3945d/openstack-network-exporter/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.034265 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4f513a2d-d752-44ee-b02c-e7f3dcb3945d/ovsdbserver-nb/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.144368 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.146106 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qn7n\" (UniqueName: \"kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n\") pod \"d93a5292-2052-4ff9-af2c-cf56baa47398\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.146301 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content\") pod \"d93a5292-2052-4ff9-af2c-cf56baa47398\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.146750 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities\") pod \"d93a5292-2052-4ff9-af2c-cf56baa47398\" (UID: \"d93a5292-2052-4ff9-af2c-cf56baa47398\") " Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.147482 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities" (OuterVolumeSpecName: "utilities") pod "d93a5292-2052-4ff9-af2c-cf56baa47398" (UID: "d93a5292-2052-4ff9-af2c-cf56baa47398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.148164 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.160871 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n" (OuterVolumeSpecName: "kube-api-access-7qn7n") pod "d93a5292-2052-4ff9-af2c-cf56baa47398" (UID: "d93a5292-2052-4ff9-af2c-cf56baa47398"). InnerVolumeSpecName "kube-api-access-7qn7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.168752 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d137d00-b823-4d67-a158-71e84c6d2c6b/openstack-network-exporter/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.199309 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d137d00-b823-4d67-a158-71e84c6d2c6b/ovsdbserver-sb/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.231182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d93a5292-2052-4ff9-af2c-cf56baa47398" (UID: "d93a5292-2052-4ff9-af2c-cf56baa47398"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.249069 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qn7n\" (UniqueName: \"kubernetes.io/projected/d93a5292-2052-4ff9-af2c-cf56baa47398-kube-api-access-7qn7n\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.249099 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d93a5292-2052-4ff9-af2c-cf56baa47398-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.517003 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-956cd8448-hm6cs_271d8bc3-c837-4768-b8de-6b185bfa2659/placement-api/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.536488 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/init-config-reloader/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.574841 4957 generic.go:334] "Generic (PLEG): container finished" podID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerID="8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869" exitCode=0 Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.574971 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nmxn" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.575160 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerDied","Data":"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869"} Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.575311 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nmxn" event={"ID":"d93a5292-2052-4ff9-af2c-cf56baa47398","Type":"ContainerDied","Data":"8da61d853bd853da288db85c22ab24576fcb48eb73fd7f0c193d4233c40c8377"} Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.575339 4957 scope.go:117] "RemoveContainer" containerID="8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.605032 4957 scope.go:117] "RemoveContainer" containerID="7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.611948 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.622863 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2nmxn"] Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.637063 4957 scope.go:117] "RemoveContainer" containerID="0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.647106 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-956cd8448-hm6cs_271d8bc3-c837-4768-b8de-6b185bfa2659/placement-log/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.685078 4957 scope.go:117] "RemoveContainer" containerID="8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869" Nov 28 22:28:38 crc kubenswrapper[4957]: E1128 22:28:38.685639 4957 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869\": container with ID starting with 8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869 not found: ID does not exist" containerID="8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.685672 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869"} err="failed to get container status \"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869\": rpc error: code = NotFound desc = could not find container \"8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869\": container with ID starting with 8408a00e15104137c05dd1269cb3226023105dd6cb94db6fbbc833f078102869 not found: ID does not exist" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.685691 4957 scope.go:117] "RemoveContainer" containerID="7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8" Nov 28 22:28:38 crc kubenswrapper[4957]: E1128 22:28:38.686087 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8\": container with ID starting with 7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8 not found: ID does not exist" containerID="7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.686111 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8"} err="failed to get container status \"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8\": rpc error: code = NotFound desc = could not find container \"7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8\": container with ID starting with 7c6beaa367174cb7bf10234a9920bce026a18df5145e00c2c205c34d7bbc85c8 not found: ID does not exist" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.686142 4957 scope.go:117] "RemoveContainer" containerID="0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642" Nov 28 22:28:38 crc kubenswrapper[4957]: E1128 22:28:38.686480 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642\": container with ID starting with 0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642 not found: ID does not exist" containerID="0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.686523 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642"} err="failed to get container status \"0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642\": rpc error: code = NotFound desc = could not find container \"0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642\": container with ID starting with 0424137c1a8f1e333aa88fe4ad91227f61adab1db16c15b83d348229a7f75642 not found: ID does not exist" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.751818 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/config-reloader/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.802546 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/init-config-reloader/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.833772 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" path="/var/lib/kubelet/pods/d93a5292-2052-4ff9-af2c-cf56baa47398/volumes" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.857893 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/prometheus/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.895722 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/thanos-sidecar/0.log" Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.994159 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:28:38 crc kubenswrapper[4957]: I1128 22:28:38.994238 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.010041 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/setup-container/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.307614 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/rabbitmq/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.341394 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/setup-container/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.346955 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/setup-container/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.554368 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/setup-container/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.645267 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9_be3139dd-9ebc-4678-abba-2217f17f76c1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.651738 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/rabbitmq/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.859626 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d6v7s_38c81f98-baf0-45aa-a33e-566697f7673c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:39 crc kubenswrapper[4957]: I1128 22:28:39.899905 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp_1499e3ce-e9eb-4774-9f22-fbac5300742b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.108908 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lq4r6_517e3d64-b818-4eea-a010-1237b735c5e2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.110075 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-s4mhd_e92f8c7f-4fd3-4ece-963f-3e904a5057bf/ssh-known-hosts-edpm-deployment/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.325740 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6598dd477f-t4jws_d3417d80-e650-4833-b935-a4cbacf23212/proxy-server/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.592459 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6598dd477f-t4jws_d3417d80-e650-4833-b935-a4cbacf23212/proxy-httpd/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.835024 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bzbx8_93cfcc7a-cedc-4adc-abb3-eef0aec22ae7/swift-ring-rebalance/0.log" Nov 28 22:28:40 crc kubenswrapper[4957]: I1128 22:28:40.915807 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-auditor/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.030119 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-reaper/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.131354 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-auditor/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.160456 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-replicator/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.192462 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-server/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.274837 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-replicator/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.398533 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-updater/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.423943 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-server/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.457352 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-auditor/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.479035 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-expirer/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.628168 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-replicator/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.642917 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-server/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.670522 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-updater/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.704487 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/rsync/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.909671 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/swift-recon-cron/0.log" Nov 28 22:28:41 crc kubenswrapper[4957]: I1128 22:28:41.931290 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7_dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:42 crc kubenswrapper[4957]: I1128 22:28:42.166702 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd_52615b47-f32d-4e3a-a0a0-dc23c7bc7677/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:42 crc kubenswrapper[4957]: I1128 22:28:42.342430 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb/test-operator-logs-container/0.log" Nov 28 22:28:42 crc kubenswrapper[4957]: I1128 22:28:42.474188 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh_c9b89b14-7e55-48b4-bbd9-5c67ed879847/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:28:43 crc kubenswrapper[4957]: I1128 22:28:43.014697 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d0047755-5ddc-48c8-a4eb-4bf540cb695f/tempest-tests-tempest-tests-runner/0.log" Nov 28 22:28:46 crc kubenswrapper[4957]: I1128 22:28:46.069019 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_179be6ed-b240-4fde-995c-92c72dbd2b02/memcached/0.log" Nov 28 22:29:07 crc kubenswrapper[4957]: I1128 22:29:07.751905 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:29:07 crc kubenswrapper[4957]: I1128 22:29:07.995541 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.024326 4957 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.040105 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.238991 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.242314 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/extract/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.250484 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.425488 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-sfgm2_12484928-2fe4-4bd6-bac2-e0f2e48829fe/kube-rbac-proxy/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.497249 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-sfgm2_12484928-2fe4-4bd6-bac2-e0f2e48829fe/manager/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.499899 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s85m7_1a5138b3-6b84-43b0-bdc9-f867a83f4bc7/kube-rbac-proxy/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.642772 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s85m7_1a5138b3-6b84-43b0-bdc9-f867a83f4bc7/manager/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.680314 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kbhl9_442226e4-b2b8-41c8-9278-2845b2fff0aa/kube-rbac-proxy/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.710787 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kbhl9_442226e4-b2b8-41c8-9278-2845b2fff0aa/manager/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.914704 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-8t4fj_c330a33e-ec13-4ec0-869b-4847b9385d5d/kube-rbac-proxy/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.943678 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-8t4fj_c330a33e-ec13-4ec0-869b-4847b9385d5d/manager/0.log" Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.993599 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:29:08 crc kubenswrapper[4957]: I1128 22:29:08.993652 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.085982 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v6427_d50c67da-27ca-4ab9-bf83-b2275ff3d801/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.125570 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xqjj5_8eac7f46-0beb-4f3f-a530-2fed527b6383/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.180632 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v6427_d50c67da-27ca-4ab9-bf83-b2275ff3d801/manager/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.311822 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ccmt8_96a751a3-4af7-4cb8-b12b-46e0d177b6f3/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.312687 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xqjj5_8eac7f46-0beb-4f3f-a530-2fed527b6383/manager/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.508862 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-8wqx7_c2cca951-4ada-44ec-ab43-a1f69ee7f7cb/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.573356 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-8wqx7_c2cca951-4ada-44ec-ab43-a1f69ee7f7cb/manager/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.589663 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ccmt8_96a751a3-4af7-4cb8-b12b-46e0d177b6f3/manager/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.731401 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-47tjl_a8962e83-cc90-4844-9bca-96e85cf789bd/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.834625 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-47tjl_a8962e83-cc90-4844-9bca-96e85cf789bd/manager/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.891137 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-2n5cx_34faaa98-3568-4478-b968-b9cbe87c77f3/kube-rbac-proxy/0.log" Nov 28 22:29:09 crc kubenswrapper[4957]: I1128 22:29:09.926335 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-2n5cx_34faaa98-3568-4478-b968-b9cbe87c77f3/manager/0.log" Nov 28 22:29:10 crc 
kubenswrapper[4957]: I1128 22:29:10.012672 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-d6twj_f510519a-6187-47f8-875e-3e9a5537c364/kube-rbac-proxy/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.117107 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-d6twj_f510519a-6187-47f8-875e-3e9a5537c364/manager/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.210728 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ln4j9_47f33b35-a8d3-4981-8001-47b906a33fa6/kube-rbac-proxy/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.297529 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ln4j9_47f33b35-a8d3-4981-8001-47b906a33fa6/manager/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.373850 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-bn4dd_c59777ed-7790-45bc-972a-f9fbe8fbccf4/kube-rbac-proxy/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.566703 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-bn4dd_c59777ed-7790-45bc-972a-f9fbe8fbccf4/manager/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.585130 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-npt5l_844d1842-4247-4b95-8cca-1785d3ed80b8/kube-rbac-proxy/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.649604 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-npt5l_844d1842-4247-4b95-8cca-1785d3ed80b8/manager/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.745619 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v_15b01ca6-83c4-47da-bd82-8b5c4a177561/kube-rbac-proxy/0.log" Nov 28 22:29:10 crc kubenswrapper[4957]: I1128 22:29:10.788290 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v_15b01ca6-83c4-47da-bd82-8b5c4a177561/manager/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.125461 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-567f7c7dd7-9wckn_917e81d1-a7a3-431f-9b6f-511334a57f50/operator/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.223369 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gnq58_f76be402-2871-4e82-8c2a-8cc359b8c889/registry-server/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.305897 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cnzv4_499b2d8c-a27a-46f1-9f38-8b29ab905da7/kube-rbac-proxy/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.412293 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cnzv4_499b2d8c-a27a-46f1-9f38-8b29ab905da7/manager/0.log" Nov 
28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.563534 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-2w9h7_02e155d2-76c6-4fca-b013-6c2dcf607cdb/kube-rbac-proxy/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.570458 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-2w9h7_02e155d2-76c6-4fca-b013-6c2dcf607cdb/manager/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.883074 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cfdjt_7a4dc310-e5f8-4a6f-8c8b-94a7faca596d/operator/0.log" Nov 28 22:29:11 crc kubenswrapper[4957]: I1128 22:29:11.949332 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nq5h8_b8066278-4583-4fe3-aed6-93543482ab1e/kube-rbac-proxy/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.125003 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nq5h8_b8066278-4583-4fe3-aed6-93543482ab1e/manager/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.179085 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f6754bd54-dbj68_a3a9a0f3-6f26-4174-973d-049a1b8a2573/kube-rbac-proxy/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.353108 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-v56f9_554f334d-cef4-48f9-bb57-03261844fbde/kube-rbac-proxy/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.383497 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fb8944fcb-x9n55_aaaab82e-6456-4b20-9d92-f19458df9948/manager/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.469233 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-v56f9_554f334d-cef4-48f9-bb57-03261844fbde/manager/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.527590 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f6754bd54-dbj68_a3a9a0f3-6f26-4174-973d-049a1b8a2573/manager/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.597724 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qshzq_3d6f1d41-eaa5-4258-906c-5894ac698e5b/kube-rbac-proxy/0.log" Nov 28 22:29:12 crc kubenswrapper[4957]: I1128 22:29:12.611573 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qshzq_3d6f1d41-eaa5-4258-906c-5894ac698e5b/manager/0.log" Nov 28 22:29:30 crc kubenswrapper[4957]: I1128 22:29:30.349716 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pj8vl_27a7baa1-a66c-4c13-be52-2a401578c92d/control-plane-machine-set-operator/0.log" Nov 28 22:29:30 crc kubenswrapper[4957]: I1128 22:29:30.445694 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f229v_b36a4b12-b069-4dc4-a503-936aae20d06e/kube-rbac-proxy/0.log" Nov 28 22:29:30 crc kubenswrapper[4957]: I1128 22:29:30.509279 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f229v_b36a4b12-b069-4dc4-a503-936aae20d06e/machine-api-operator/0.log" Nov 28 22:29:38 crc kubenswrapper[4957]: I1128 22:29:38.992838 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:29:38 crc kubenswrapper[4957]: I1128 22:29:38.993442 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:29:38 crc kubenswrapper[4957]: I1128 22:29:38.993490 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:29:38 crc kubenswrapper[4957]: I1128 22:29:38.994690 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:29:38 crc kubenswrapper[4957]: I1128 22:29:38.994775 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" gracePeriod=600 Nov 28 22:29:39 crc kubenswrapper[4957]: E1128 22:29:39.116472 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:29:39 crc kubenswrapper[4957]: I1128 22:29:39.217455 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" exitCode=0 Nov 28 22:29:39 crc kubenswrapper[4957]: I1128 22:29:39.217502 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228"} Nov 28 22:29:39 crc kubenswrapper[4957]: I1128 22:29:39.217542 4957 scope.go:117] "RemoveContainer" containerID="dcd3913b7c5339e971d93a9d0a6c9bbba13a49ba25d26a2d0a0084811e529b74" Nov 28 22:29:39 crc kubenswrapper[4957]: I1128 22:29:39.218402 4957 scope.go:117] "RemoveContainer" 
containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:29:39 crc kubenswrapper[4957]: E1128 22:29:39.218840 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:29:43 crc kubenswrapper[4957]: I1128 22:29:43.058001 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-88wlc_e13a6b33-f471-46db-b7f2-98600799eaef/cert-manager-controller/0.log" Nov 28 22:29:43 crc kubenswrapper[4957]: I1128 22:29:43.151808 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8kfk7_228af258-a007-4de0-922b-f434bc1e665b/cert-manager-cainjector/0.log" Nov 28 22:29:43 crc kubenswrapper[4957]: I1128 22:29:43.238017 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-gjw6l_bd388a87-03fe-4f7f-b36a-a89a8d110806/cert-manager-webhook/0.log" Nov 28 22:29:52 crc kubenswrapper[4957]: I1128 22:29:52.815929 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:29:52 crc kubenswrapper[4957]: E1128 22:29:52.816770 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.371568 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-8fhk9_c781e919-f546-4bd7-b564-cd424540268c/nmstate-console-plugin/0.log" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.500172 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4fhg7_f9d6e935-f7a3-4a37-8d21-5bb73ef04186/nmstate-handler/0.log" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.549731 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-8gmh8_b878fdac-b49b-40d8-b0cb-af5d44f21f9d/kube-rbac-proxy/0.log" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.564278 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-8gmh8_b878fdac-b49b-40d8-b0cb-af5d44f21f9d/nmstate-metrics/0.log" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.730666 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-fzfdw_66d17f19-66f9-41c9-9566-fca688da8506/nmstate-operator/0.log" Nov 28 22:29:55 crc kubenswrapper[4957]: I1128 22:29:55.767000 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-t9nrr_9234d52f-6818-4ccf-ac79-4d5e4f3cce21/nmstate-webhook/0.log" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.157951 4957 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt"] Nov 28 22:30:00 crc kubenswrapper[4957]: E1128 22:30:00.159243 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="registry-server" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.159264 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="registry-server" Nov 28 22:30:00 crc kubenswrapper[4957]: E1128 22:30:00.159300 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="extract-content" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.159309 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="extract-content" Nov 28 22:30:00 crc kubenswrapper[4957]: E1128 22:30:00.159340 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="extract-utilities" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.159354 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="extract-utilities" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.159652 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93a5292-2052-4ff9-af2c-cf56baa47398" containerName="registry-server" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.161154 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.165796 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.166697 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.191822 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt"] Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.261747 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.262435 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.262602 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.364585 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.364647 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.364704 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.365909 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.372629 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.384335 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x\") pod \"collect-profiles-29406150-nhlkt\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.495267 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:00 crc kubenswrapper[4957]: I1128 22:30:00.978564 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt"] Nov 28 22:30:01 crc kubenswrapper[4957]: I1128 22:30:01.457483 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" event={"ID":"762ef65c-4618-4b3b-a129-d897dbde90de","Type":"ContainerStarted","Data":"495e4db44d830c26a1b68fd4ad4e70ce9481b9271df76ed2f22ecaac0c391797"} Nov 28 22:30:01 crc kubenswrapper[4957]: I1128 22:30:01.457536 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" event={"ID":"762ef65c-4618-4b3b-a129-d897dbde90de","Type":"ContainerStarted","Data":"dc554f49f12aa9626da95cfea10783b286720cf630ca1cedc0b526a779ac00e4"} Nov 28 22:30:01 crc kubenswrapper[4957]: I1128 22:30:01.489088 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" podStartSLOduration=1.4890656660000001 podStartE2EDuration="1.489065666s" podCreationTimestamp="2025-11-28 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 22:30:01.482280859 +0000 UTC m=+6040.950928768" watchObservedRunningTime="2025-11-28 22:30:01.489065666 +0000 UTC m=+6040.957713575" Nov 28 22:30:02 crc kubenswrapper[4957]: I1128 22:30:02.515526 4957 generic.go:334] "Generic (PLEG): container finished" podID="762ef65c-4618-4b3b-a129-d897dbde90de" containerID="495e4db44d830c26a1b68fd4ad4e70ce9481b9271df76ed2f22ecaac0c391797" exitCode=0 Nov 28 22:30:02 crc kubenswrapper[4957]: I1128 22:30:02.515875 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" event={"ID":"762ef65c-4618-4b3b-a129-d897dbde90de","Type":"ContainerDied","Data":"495e4db44d830c26a1b68fd4ad4e70ce9481b9271df76ed2f22ecaac0c391797"} Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.912428 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.951404 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume\") pod \"762ef65c-4618-4b3b-a129-d897dbde90de\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.951952 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume\") pod \"762ef65c-4618-4b3b-a129-d897dbde90de\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.952128 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x\") pod \"762ef65c-4618-4b3b-a129-d897dbde90de\" (UID: \"762ef65c-4618-4b3b-a129-d897dbde90de\") " Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.952604 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume" (OuterVolumeSpecName: "config-volume") pod "762ef65c-4618-4b3b-a129-d897dbde90de" (UID: "762ef65c-4618-4b3b-a129-d897dbde90de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.953534 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762ef65c-4618-4b3b-a129-d897dbde90de-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.961074 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x" (OuterVolumeSpecName: "kube-api-access-bnl6x") pod "762ef65c-4618-4b3b-a129-d897dbde90de" (UID: "762ef65c-4618-4b3b-a129-d897dbde90de"). InnerVolumeSpecName "kube-api-access-bnl6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:30:03 crc kubenswrapper[4957]: I1128 22:30:03.971741 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "762ef65c-4618-4b3b-a129-d897dbde90de" (UID: "762ef65c-4618-4b3b-a129-d897dbde90de"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.056231 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/762ef65c-4618-4b3b-a129-d897dbde90de-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.056265 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnl6x\" (UniqueName: \"kubernetes.io/projected/762ef65c-4618-4b3b-a129-d897dbde90de-kube-api-access-bnl6x\") on node \"crc\" DevicePath \"\"" Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.540033 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" event={"ID":"762ef65c-4618-4b3b-a129-d897dbde90de","Type":"ContainerDied","Data":"dc554f49f12aa9626da95cfea10783b286720cf630ca1cedc0b526a779ac00e4"} Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.540342 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc554f49f12aa9626da95cfea10783b286720cf630ca1cedc0b526a779ac00e4" Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.540119 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406150-nhlkt" Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.573509 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95"] Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.585544 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406105-jdh95"] Nov 28 22:30:04 crc kubenswrapper[4957]: I1128 22:30:04.827475 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa2ffd8-a33a-444e-b95b-55fd7bbf349b" path="/var/lib/kubelet/pods/1aa2ffd8-a33a-444e-b95b-55fd7bbf349b/volumes" Nov 28 22:30:07 crc kubenswrapper[4957]: I1128 22:30:07.813366 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:30:07 crc kubenswrapper[4957]: E1128 22:30:07.814335 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:30:08 crc kubenswrapper[4957]: I1128 22:30:08.390147 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/kube-rbac-proxy/0.log" Nov 28 22:30:08 crc kubenswrapper[4957]: I1128 22:30:08.412750 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/manager/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.081173 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-bw9vg_65a613a8-4720-4ef2-be4d-dceeee3ce44e/cluster-logging-operator/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.256459 4957 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-logging_collector-zvxsk_827dd4f4-1c01-43ba-b1d8-d5c774a45d46/collector/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.323085 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1/loki-compactor/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.456706 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-4r6v6_67552d33-b77e-41cc-8233-16009aa347ca/loki-distributor/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.512746 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-gtfb8_0ae348b1-f460-4971-bd3f-6832a96d1f70/gateway/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.544234 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-gtfb8_0ae348b1-f460-4971-bd3f-6832a96d1f70/opa/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.667505 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-p9fzr_e2dbe144-6748-4562-888e-ac850bb6c0b4/gateway/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.699198 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-p9fzr_e2dbe144-6748-4562-888e-ac850bb6c0b4/opa/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.812948 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:30:22 crc kubenswrapper[4957]: E1128 22:30:22.813348 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.840469 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_cbbe211e-0fa5-42f6-830e-4feb479b2b58/loki-index-gateway/0.log" Nov 28 22:30:22 crc kubenswrapper[4957]: I1128 22:30:22.913120 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_3866c99e-7b87-4a00-9df5-b121467d603e/loki-ingester/0.log" Nov 28 22:30:23 crc kubenswrapper[4957]: I1128 22:30:23.054890 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-2cmml_c90dbf3d-fe84-45cc-baca-5cbc545bbb53/loki-querier/0.log" Nov 28 22:30:23 crc kubenswrapper[4957]: I1128 22:30:23.107113 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-6rjgr_ba532acb-af97-43b9-b61b-e54721951c1a/loki-query-frontend/0.log" Nov 28 22:30:31 crc kubenswrapper[4957]: I1128 22:30:31.430949 4957 scope.go:117] "RemoveContainer" containerID="31f10edb7ce4c58574b8a0ee0b4ef3262431d919f85d3691cc24b76f15900135" Nov 28 22:30:35 crc kubenswrapper[4957]: I1128 22:30:35.813146 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:30:35 crc 
kubenswrapper[4957]: E1128 22:30:35.814071 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:30:36 crc kubenswrapper[4957]: I1128 22:30:36.969736 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-mtzhn_505298d8-01d1-4918-8329-04c935f6a8a0/kube-rbac-proxy/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.163273 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-mtzhn_505298d8-01d1-4918-8329-04c935f6a8a0/controller/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.203356 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.408718 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.414228 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.451788 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.472272 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.678879 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.681173 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.697687 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.706575 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.880445 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.932563 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.937773 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:30:37 crc kubenswrapper[4957]: I1128 22:30:37.959410 4957 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/controller/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.140231 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/frr-metrics/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.188755 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/kube-rbac-proxy/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.203571 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/kube-rbac-proxy-frr/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.400438 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/reloader/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.439054 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-zqgbd_58bfcebd-2036-46ce-8b59-d47e2b138c2f/frr-k8s-webhook-server/0.log" Nov 28 22:30:38 crc kubenswrapper[4957]: I1128 22:30:38.735627 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cdb7495d5-qqgdt_aa98f27d-5bda-41a4-bd59-1dff81ae7a65/manager/0.log" Nov 28 22:30:39 crc kubenswrapper[4957]: I1128 22:30:39.004709 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66c879d448-sm6v6_b0fbca5a-3b56-4822-9a82-5ec342b6b89a/webhook-server/0.log" Nov 28 22:30:39 crc kubenswrapper[4957]: I1128 22:30:39.220914 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dn826_df95d986-54c6-4e37-87f7-6775e4c24d4f/kube-rbac-proxy/0.log" Nov 28 22:30:39 crc kubenswrapper[4957]: I1128 22:30:39.783878 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dn826_df95d986-54c6-4e37-87f7-6775e4c24d4f/speaker/0.log" Nov 28 22:30:39 crc kubenswrapper[4957]: I1128 22:30:39.891018 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/frr/0.log" Nov 28 22:30:48 crc kubenswrapper[4957]: I1128 22:30:48.813380 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:30:48 crc kubenswrapper[4957]: E1128 22:30:48.814308 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:30:51 crc kubenswrapper[4957]: I1128 22:30:51.882318 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.047607 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.098039 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.128339 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.281833 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.291427 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/extract/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.307402 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.476526 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.636941 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.666187 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.668501 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.810761 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.831763 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:30:52 crc kubenswrapper[4957]: I1128 22:30:52.856321 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/extract/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.003479 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.197652 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.202918 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.214539 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.424759 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/extract/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.444458 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.451024 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.583877 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.798410 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.800897 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:30:53 crc kubenswrapper[4957]: I1128 22:30:53.837673 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.182832 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.182956 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/extract/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.213955 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.344742 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.516088 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.523194 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.528205 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.680544 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.713807 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.725800 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/extract/0.log" Nov 28 22:30:54 crc kubenswrapper[4957]: I1128 22:30:54.877738 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.038021 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.044470 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.050293 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.233827 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.265054 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.454898 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.625467 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.644675 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.672602 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.884636 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/registry-server/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.910023 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:30:55 crc kubenswrapper[4957]: I1128 22:30:55.922112 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.101480 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rv2ws_01c31d76-bda9-44e6-b62a-04a154eeae84/marketplace-operator/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.213508 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.370969 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.384960 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.435183 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.647712 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.661885 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.696637 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/registry-server/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.843984 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:30:56 crc kubenswrapper[4957]: I1128 22:30:56.916002 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/registry-server/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.044256 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.051393 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.052511 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.196975 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.200009 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:30:57 crc kubenswrapper[4957]: I1128 22:30:57.803697 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/registry-server/0.log" Nov 28 22:31:03 crc kubenswrapper[4957]: I1128 22:31:03.814761 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:31:03 crc kubenswrapper[4957]: E1128 22:31:03.815872 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:31:08 crc kubenswrapper[4957]: I1128 22:31:08.754073 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-v86zm_cda43354-6472-4023-914d-dde633218f08/prometheus-operator/0.log" Nov 28 22:31:08 crc kubenswrapper[4957]: I1128 22:31:08.944488 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_6439437b-8d36-450e-87e0-9b394b0aa987/prometheus-operator-admission-webhook/0.log" Nov 28 22:31:09 crc kubenswrapper[4957]: I1128 22:31:09.020491 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_a78ee796-8b40-4db0-9834-a4d66c77f95a/prometheus-operator-admission-webhook/0.log" Nov 28 22:31:09 crc kubenswrapper[4957]: I1128 22:31:09.176869 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-sbrrc_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b/operator/0.log" Nov 28 22:31:09 crc kubenswrapper[4957]: I1128 
22:31:09.205200 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-bhd2d_72d57747-268e-40db-85cc-98d5ed48a55f/observability-ui-dashboards/0.log" Nov 28 22:31:09 crc kubenswrapper[4957]: I1128 22:31:09.337680 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-jbfht_2841a3ed-5cfc-4a7b-a2bd-a3536018850f/perses-operator/0.log" Nov 28 22:31:17 crc kubenswrapper[4957]: I1128 22:31:17.813299 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:31:17 crc kubenswrapper[4957]: E1128 22:31:17.814490 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:31:20 crc kubenswrapper[4957]: I1128 22:31:20.889165 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/kube-rbac-proxy/0.log" Nov 28 22:31:20 crc kubenswrapper[4957]: I1128 22:31:20.971367 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/manager/0.log" Nov 28 22:31:28 crc kubenswrapper[4957]: I1128 22:31:28.815468 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:31:28 crc kubenswrapper[4957]: E1128 22:31:28.816350 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:31:40 crc kubenswrapper[4957]: I1128 22:31:40.813094 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:31:40 crc kubenswrapper[4957]: E1128 22:31:40.814084 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:31:53 crc kubenswrapper[4957]: I1128 22:31:53.813741 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:31:53 crc kubenswrapper[4957]: E1128 22:31:53.814618 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:32:06 crc kubenswrapper[4957]: I1128 22:32:06.818640 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:32:06 crc kubenswrapper[4957]: E1128 22:32:06.819514 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:32:20 crc kubenswrapper[4957]: I1128 22:32:20.823845 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:32:20 crc kubenswrapper[4957]: E1128 22:32:20.824597 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:32:34 crc kubenswrapper[4957]: I1128 22:32:34.813303 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:32:34 crc kubenswrapper[4957]: E1128 22:32:34.814166 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:32:47 crc kubenswrapper[4957]: I1128 22:32:47.820256 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:32:47 crc kubenswrapper[4957]: E1128 22:32:47.820955 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:33:00 crc kubenswrapper[4957]: I1128 22:33:00.822112 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:33:00 crc kubenswrapper[4957]: E1128 22:33:00.822982 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.459998 4957 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:03 crc kubenswrapper[4957]: E1128 22:33:03.461252 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762ef65c-4618-4b3b-a129-d897dbde90de" containerName="collect-profiles" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.461276 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="762ef65c-4618-4b3b-a129-d897dbde90de" containerName="collect-profiles" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.461578 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="762ef65c-4618-4b3b-a129-d897dbde90de" containerName="collect-profiles" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.463765 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.480459 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.652201 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.652278 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.652371 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk9m\" (UniqueName: \"kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.754148 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.754254 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.754366 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk9m\" (UniqueName: \"kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 
22:33:03.755799 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.756038 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.780151 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk9m\" (UniqueName: \"kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m\") pod \"redhat-marketplace-ckjlb\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:03 crc kubenswrapper[4957]: I1128 22:33:03.792948 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:04 crc kubenswrapper[4957]: I1128 22:33:04.289721 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:04 crc kubenswrapper[4957]: I1128 22:33:04.591816 4957 generic.go:334] "Generic (PLEG): container finished" podID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerID="6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c" exitCode=0 Nov 28 22:33:04 crc kubenswrapper[4957]: I1128 22:33:04.592031 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerDied","Data":"6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c"} Nov 28 22:33:04 crc kubenswrapper[4957]: I1128 22:33:04.593409 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerStarted","Data":"08d9cdd81ab1bff27c7dce1b77bf40ce90ca0be6d619079461be566bba2e9a28"} Nov 28 22:33:04 crc kubenswrapper[4957]: I1128 22:33:04.594026 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:33:05 crc kubenswrapper[4957]: I1128 22:33:05.606097 4957 generic.go:334] "Generic (PLEG): container finished" podID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerID="3a08321aebb1a18356caeb49687cf00cc868531f530a994158c0b86360ce6ba9" exitCode=0 Nov 28 22:33:05 crc kubenswrapper[4957]: I1128 22:33:05.606141 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-krdfw/must-gather-9j9tl" event={"ID":"62ed8424-d08d-4c25-b34c-aae91b72378d","Type":"ContainerDied","Data":"3a08321aebb1a18356caeb49687cf00cc868531f530a994158c0b86360ce6ba9"} Nov 28 22:33:05 crc kubenswrapper[4957]: I1128 22:33:05.608352 4957 scope.go:117] "RemoveContainer" containerID="3a08321aebb1a18356caeb49687cf00cc868531f530a994158c0b86360ce6ba9" Nov 28 22:33:06 crc kubenswrapper[4957]: I1128 22:33:06.483917 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-krdfw_must-gather-9j9tl_62ed8424-d08d-4c25-b34c-aae91b72378d/gather/0.log" Nov 28 22:33:06 crc 
kubenswrapper[4957]: I1128 22:33:06.617751 4957 generic.go:334] "Generic (PLEG): container finished" podID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerID="019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add" exitCode=0 Nov 28 22:33:06 crc kubenswrapper[4957]: I1128 22:33:06.617795 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerDied","Data":"019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add"} Nov 28 22:33:07 crc kubenswrapper[4957]: I1128 22:33:07.636528 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerStarted","Data":"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443"} Nov 28 22:33:07 crc kubenswrapper[4957]: I1128 22:33:07.656820 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ckjlb" podStartSLOduration=2.187937336 podStartE2EDuration="4.656803297s" podCreationTimestamp="2025-11-28 22:33:03 +0000 UTC" firstStartedPulling="2025-11-28 22:33:04.593492485 +0000 UTC m=+6224.062140394" lastFinishedPulling="2025-11-28 22:33:07.062358446 +0000 UTC m=+6226.531006355" observedRunningTime="2025-11-28 22:33:07.654618704 +0000 UTC m=+6227.123266633" watchObservedRunningTime="2025-11-28 22:33:07.656803297 +0000 UTC m=+6227.125451206" Nov 28 22:33:13 crc kubenswrapper[4957]: I1128 22:33:13.794239 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:13 crc kubenswrapper[4957]: I1128 22:33:13.794900 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:13 crc kubenswrapper[4957]: I1128 22:33:13.878068 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.562677 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-krdfw/must-gather-9j9tl"] Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.563183 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-krdfw/must-gather-9j9tl" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="copy" containerID="cri-o://752697841de6889fd13ca8e4737b02f4f37769b3aa245a7d769aee2ca67d8010" gracePeriod=2 Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.573537 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-krdfw/must-gather-9j9tl"] Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.721033 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-krdfw_must-gather-9j9tl_62ed8424-d08d-4c25-b34c-aae91b72378d/copy/0.log" Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.721934 4957 generic.go:334] "Generic (PLEG): container finished" podID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerID="752697841de6889fd13ca8e4737b02f4f37769b3aa245a7d769aee2ca67d8010" exitCode=143 Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.802383 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.816051 4957 scope.go:117] "RemoveContainer" 
containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:33:14 crc kubenswrapper[4957]: E1128 22:33:14.816409 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:33:14 crc kubenswrapper[4957]: I1128 22:33:14.863892 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.037197 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-krdfw_must-gather-9j9tl_62ed8424-d08d-4c25-b34c-aae91b72378d/copy/0.log" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.037579 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.125043 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output\") pod \"62ed8424-d08d-4c25-b34c-aae91b72378d\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.125299 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bffjs\" (UniqueName: \"kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs\") pod \"62ed8424-d08d-4c25-b34c-aae91b72378d\" (UID: \"62ed8424-d08d-4c25-b34c-aae91b72378d\") " Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.133366 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs" (OuterVolumeSpecName: "kube-api-access-bffjs") pod "62ed8424-d08d-4c25-b34c-aae91b72378d" (UID: "62ed8424-d08d-4c25-b34c-aae91b72378d"). InnerVolumeSpecName "kube-api-access-bffjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.229047 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bffjs\" (UniqueName: \"kubernetes.io/projected/62ed8424-d08d-4c25-b34c-aae91b72378d-kube-api-access-bffjs\") on node \"crc\" DevicePath \"\"" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.259412 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "62ed8424-d08d-4c25-b34c-aae91b72378d" (UID: "62ed8424-d08d-4c25-b34c-aae91b72378d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.331036 4957 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/62ed8424-d08d-4c25-b34c-aae91b72378d-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.734082 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-krdfw_must-gather-9j9tl_62ed8424-d08d-4c25-b34c-aae91b72378d/copy/0.log" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.734554 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-krdfw/must-gather-9j9tl" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.734561 4957 scope.go:117] "RemoveContainer" containerID="752697841de6889fd13ca8e4737b02f4f37769b3aa245a7d769aee2ca67d8010" Nov 28 22:33:15 crc kubenswrapper[4957]: I1128 22:33:15.769943 4957 scope.go:117] "RemoveContainer" containerID="3a08321aebb1a18356caeb49687cf00cc868531f530a994158c0b86360ce6ba9" Nov 28 22:33:16 crc kubenswrapper[4957]: I1128 22:33:16.747424 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ckjlb" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="registry-server" containerID="cri-o://1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443" gracePeriod=2 Nov 28 22:33:16 crc kubenswrapper[4957]: I1128 22:33:16.832194 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" path="/var/lib/kubelet/pods/62ed8424-d08d-4c25-b34c-aae91b72378d/volumes" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.252979 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.386499 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities\") pod \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.386636 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpk9m\" (UniqueName: \"kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m\") pod \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.386663 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content\") pod \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\" (UID: \"3cc675ba-618d-4483-80e6-5fcdca4e3f0f\") " Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.387789 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities" (OuterVolumeSpecName: "utilities") pod "3cc675ba-618d-4483-80e6-5fcdca4e3f0f" (UID: "3cc675ba-618d-4483-80e6-5fcdca4e3f0f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.395050 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m" (OuterVolumeSpecName: "kube-api-access-mpk9m") pod "3cc675ba-618d-4483-80e6-5fcdca4e3f0f" (UID: "3cc675ba-618d-4483-80e6-5fcdca4e3f0f"). InnerVolumeSpecName "kube-api-access-mpk9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.403746 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cc675ba-618d-4483-80e6-5fcdca4e3f0f" (UID: "3cc675ba-618d-4483-80e6-5fcdca4e3f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.489857 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.489889 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpk9m\" (UniqueName: \"kubernetes.io/projected/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-kube-api-access-mpk9m\") on node \"crc\" DevicePath \"\"" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.489900 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc675ba-618d-4483-80e6-5fcdca4e3f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.760242 4957 generic.go:334] "Generic (PLEG): container finished" podID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerID="1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443" exitCode=0 Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.760248 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerDied","Data":"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443"} Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.760367 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckjlb" event={"ID":"3cc675ba-618d-4483-80e6-5fcdca4e3f0f","Type":"ContainerDied","Data":"08d9cdd81ab1bff27c7dce1b77bf40ce90ca0be6d619079461be566bba2e9a28"} Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.760391 4957 scope.go:117] "RemoveContainer" containerID="1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.760290 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckjlb" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.785558 4957 scope.go:117] "RemoveContainer" containerID="019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.842622 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.853095 4957 scope.go:117] "RemoveContainer" containerID="6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c" Nov 28 22:33:17 crc kubenswrapper[4957]: I1128 22:33:17.870583 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckjlb"] Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.047570 4957 scope.go:117] "RemoveContainer" containerID="1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443" Nov 28 22:33:18 crc kubenswrapper[4957]: E1128 22:33:18.048883 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443\": container with ID starting with 1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443 not found: ID does not exist" containerID="1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.049118 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443"} err="failed to get container status \"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443\": rpc error: code = NotFound desc = could not find container \"1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443\": container with ID starting with 1782b4bc64fab69e004a8d4a082ff038243490972acdeea5173ee46aaae27443 not found: ID does not exist" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.049146 4957 scope.go:117] "RemoveContainer" containerID="019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add" Nov 28 22:33:18 crc kubenswrapper[4957]: E1128 22:33:18.049684 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add\": container with ID starting with 019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add not found: ID does not exist" containerID="019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.049721 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add"} err="failed to get container status \"019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add\": rpc error: code = NotFound desc = could not find container \"019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add\": container with ID starting with 019a6731d07faa930b1de858a0c6c51de2e9209e757a16301b001a19c0cd5add not found: ID does not exist" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.049746 4957 scope.go:117] "RemoveContainer" containerID="6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c" Nov 28 22:33:18 crc kubenswrapper[4957]: E1128 22:33:18.050415 4957 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c\": container with ID starting with 6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c not found: ID does not exist" containerID="6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.050453 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c"} err="failed to get container status \"6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c\": rpc error: code = NotFound desc = could not find container \"6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c\": container with ID starting with 6bf24d3cfad0f0f08e6d1853d3968686dc63c4687dc9306d978b924f7d1a324c not found: ID does not exist" Nov 28 22:33:18 crc kubenswrapper[4957]: I1128 22:33:18.827390 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" path="/var/lib/kubelet/pods/3cc675ba-618d-4483-80e6-5fcdca4e3f0f/volumes" Nov 28 22:33:28 crc kubenswrapper[4957]: I1128 22:33:28.813348 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:33:28 crc kubenswrapper[4957]: E1128 22:33:28.814385 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:33:31 crc kubenswrapper[4957]: I1128 22:33:31.577311 4957 scope.go:117] "RemoveContainer" containerID="9f38608e1f8c2710094d89a50b418fc42d72bfba0684118f295f9879a1143e7c" Nov 28 22:33:40 crc kubenswrapper[4957]: I1128 22:33:40.823042 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:33:40 crc kubenswrapper[4957]: E1128 22:33:40.824535 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:33:52 crc kubenswrapper[4957]: I1128 22:33:52.813204 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:33:52 crc kubenswrapper[4957]: E1128 22:33:52.814142 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:34:04 crc kubenswrapper[4957]: I1128 22:34:04.813041 4957 scope.go:117] "RemoveContainer" 
containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:34:04 crc kubenswrapper[4957]: E1128 22:34:04.813939 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:34:16 crc kubenswrapper[4957]: I1128 22:34:16.813491 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:34:16 crc kubenswrapper[4957]: E1128 22:34:16.814262 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:34:27 crc kubenswrapper[4957]: I1128 22:34:27.812797 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:34:27 crc kubenswrapper[4957]: E1128 22:34:27.813596 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:34:31 crc kubenswrapper[4957]: I1128 22:34:31.673493 4957 scope.go:117] "RemoveContainer" containerID="f83b80b4699e1fc3a85f68c6adb06a271c897bfd8d08e3e393259a7b06ac17b9" Nov 28 22:34:40 crc kubenswrapper[4957]: I1128 22:34:40.823830 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:34:41 crc kubenswrapper[4957]: I1128 22:34:41.673734 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51"} Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.437723 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pvbdh/must-gather-p8n4m"] Nov 28 22:36:12 crc kubenswrapper[4957]: E1128 22:36:12.439102 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="extract-content" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439266 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="extract-content" Nov 28 22:36:12 crc kubenswrapper[4957]: E1128 22:36:12.439284 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="extract-utilities" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439292 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" 
containerName="extract-utilities" Nov 28 22:36:12 crc kubenswrapper[4957]: E1128 22:36:12.439320 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="copy" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439329 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="copy" Nov 28 22:36:12 crc kubenswrapper[4957]: E1128 22:36:12.439361 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="gather" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439369 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="gather" Nov 28 22:36:12 crc kubenswrapper[4957]: E1128 22:36:12.439396 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="registry-server" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439405 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="registry-server" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439713 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="copy" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439743 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc675ba-618d-4483-80e6-5fcdca4e3f0f" containerName="registry-server" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.439757 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ed8424-d08d-4c25-b34c-aae91b72378d" containerName="gather" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.447608 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.456340 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pvbdh"/"openshift-service-ca.crt" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.456472 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pvbdh"/"kube-root-ca.crt" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.456699 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pvbdh"/"default-dockercfg-h5dzs" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.463370 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pvbdh/must-gather-p8n4m"] Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.582398 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbq7x\" (UniqueName: \"kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.582488 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.684668 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbq7x\" (UniqueName: \"kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.684775 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.685364 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.712196 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbq7x\" (UniqueName: \"kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x\") pod \"must-gather-p8n4m\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:12 crc kubenswrapper[4957]: I1128 22:36:12.788934 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:36:13 crc kubenswrapper[4957]: I1128 22:36:13.270651 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pvbdh/must-gather-p8n4m"] Nov 28 22:36:13 crc kubenswrapper[4957]: I1128 22:36:13.679474 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" event={"ID":"73621557-cdd8-489b-a534-c85c8cb66e46","Type":"ContainerStarted","Data":"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65"} Nov 28 22:36:13 crc kubenswrapper[4957]: I1128 22:36:13.679967 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" event={"ID":"73621557-cdd8-489b-a534-c85c8cb66e46","Type":"ContainerStarted","Data":"e4321a1ec46f040df5323df9ebcf36b2e0860ea11ebd8c3b6c3846b9258175f9"} Nov 28 22:36:14 crc kubenswrapper[4957]: I1128 22:36:14.691446 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" event={"ID":"73621557-cdd8-489b-a534-c85c8cb66e46","Type":"ContainerStarted","Data":"387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe"} Nov 28 22:36:14 crc kubenswrapper[4957]: I1128 22:36:14.721973 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" podStartSLOduration=2.721949863 podStartE2EDuration="2.721949863s" podCreationTimestamp="2025-11-28 22:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 22:36:14.70396113 +0000 UTC m=+6414.172609039" watchObservedRunningTime="2025-11-28 22:36:14.721949863 +0000 UTC m=+6414.190597792" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.205537 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-br2wv"] Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.207638 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.301890 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqpl\" (UniqueName: \"kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.302071 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.404396 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvqpl\" (UniqueName: \"kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.404583 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.405081 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.424706 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvqpl\" (UniqueName: \"kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl\") pod \"crc-debug-br2wv\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.529854 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:36:17 crc kubenswrapper[4957]: I1128 22:36:17.730376 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" event={"ID":"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f","Type":"ContainerStarted","Data":"be152459faf76cdf422bba1e2a1e3426fef04de7b8f6ffcdf1a43f4775eec873"} Nov 28 22:36:18 crc kubenswrapper[4957]: I1128 22:36:18.742793 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" event={"ID":"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f","Type":"ContainerStarted","Data":"776bd15fdc8b791bb7b7134dbe3a8a9e03340f5036c44313d1c838b05cbe2c4c"} Nov 28 22:36:18 crc kubenswrapper[4957]: I1128 22:36:18.758755 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" podStartSLOduration=1.7587327259999999 podStartE2EDuration="1.758732726s" podCreationTimestamp="2025-11-28 22:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 22:36:18.755351663 +0000 UTC m=+6418.223999572" watchObservedRunningTime="2025-11-28 22:36:18.758732726 +0000 UTC m=+6418.227380635" Nov 28 22:36:59 crc kubenswrapper[4957]: I1128 22:36:59.193560 4957 generic.go:334] "Generic (PLEG): container finished" podID="e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" containerID="776bd15fdc8b791bb7b7134dbe3a8a9e03340f5036c44313d1c838b05cbe2c4c" exitCode=0 Nov 28 22:36:59 crc kubenswrapper[4957]: I1128 22:36:59.193588 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" event={"ID":"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f","Type":"ContainerDied","Data":"776bd15fdc8b791bb7b7134dbe3a8a9e03340f5036c44313d1c838b05cbe2c4c"} Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.344993 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.386548 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-br2wv"] Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.401030 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-br2wv"] Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.411305 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvqpl\" (UniqueName: \"kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl\") pod \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.411554 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host\") pod \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\" (UID: \"e9819b62-a861-4a8f-a4d2-1bc968c2aa0f\") " Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.411698 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host" (OuterVolumeSpecName: "host") pod "e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" (UID: "e9819b62-a861-4a8f-a4d2-1bc968c2aa0f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.412130 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.417239 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl" (OuterVolumeSpecName: "kube-api-access-lvqpl") pod "e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" (UID: "e9819b62-a861-4a8f-a4d2-1bc968c2aa0f"). InnerVolumeSpecName "kube-api-access-lvqpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.514276 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvqpl\" (UniqueName: \"kubernetes.io/projected/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f-kube-api-access-lvqpl\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:00 crc kubenswrapper[4957]: I1128 22:37:00.825918 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" path="/var/lib/kubelet/pods/e9819b62-a861-4a8f-a4d2-1bc968c2aa0f/volumes" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.215483 4957 scope.go:117] "RemoveContainer" containerID="776bd15fdc8b791bb7b7134dbe3a8a9e03340f5036c44313d1c838b05cbe2c4c" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.215519 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-br2wv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.556519 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-dd2hv"] Nov 28 22:37:01 crc kubenswrapper[4957]: E1128 22:37:01.557304 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" containerName="container-00" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.557319 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" containerName="container-00" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.557605 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9819b62-a861-4a8f-a4d2-1bc968c2aa0f" containerName="container-00" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.558451 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.638453 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pck\" (UniqueName: \"kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.638796 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.741608 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pck\" (UniqueName: \"kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.741725 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.741881 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.759536 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pck\" (UniqueName: \"kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck\") pod \"crc-debug-dd2hv\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:01 crc kubenswrapper[4957]: I1128 22:37:01.879983 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:02 crc kubenswrapper[4957]: I1128 22:37:02.231228 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" event={"ID":"547a5811-7041-40a4-9b71-5704e101c02c","Type":"ContainerStarted","Data":"558773d661b96ac7f1cd62e7675befb6e0c86f5ede1e57a352c74bb3ff50fabc"} Nov 28 22:37:03 crc kubenswrapper[4957]: I1128 22:37:03.247874 4957 generic.go:334] "Generic (PLEG): container finished" podID="547a5811-7041-40a4-9b71-5704e101c02c" containerID="1fa7ca4de12ae0f0af8e21af4f6c258223ee1218d94c840d2875a2ed965fce22" exitCode=0 Nov 28 22:37:03 crc kubenswrapper[4957]: I1128 22:37:03.247977 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" event={"ID":"547a5811-7041-40a4-9b71-5704e101c02c","Type":"ContainerDied","Data":"1fa7ca4de12ae0f0af8e21af4f6c258223ee1218d94c840d2875a2ed965fce22"} Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.386235 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.514903 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host\") pod \"547a5811-7041-40a4-9b71-5704e101c02c\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.514985 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host" (OuterVolumeSpecName: "host") pod "547a5811-7041-40a4-9b71-5704e101c02c" (UID: "547a5811-7041-40a4-9b71-5704e101c02c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.515252 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8pck\" (UniqueName: \"kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck\") pod \"547a5811-7041-40a4-9b71-5704e101c02c\" (UID: \"547a5811-7041-40a4-9b71-5704e101c02c\") " Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.515801 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547a5811-7041-40a4-9b71-5704e101c02c-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.520724 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck" (OuterVolumeSpecName: "kube-api-access-c8pck") pod "547a5811-7041-40a4-9b71-5704e101c02c" (UID: "547a5811-7041-40a4-9b71-5704e101c02c"). InnerVolumeSpecName "kube-api-access-c8pck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.617625 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8pck\" (UniqueName: \"kubernetes.io/projected/547a5811-7041-40a4-9b71-5704e101c02c-kube-api-access-c8pck\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.660658 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:04 crc kubenswrapper[4957]: E1128 22:37:04.661337 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547a5811-7041-40a4-9b71-5704e101c02c" containerName="container-00" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.661359 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="547a5811-7041-40a4-9b71-5704e101c02c" containerName="container-00" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.661624 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="547a5811-7041-40a4-9b71-5704e101c02c" containerName="container-00" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.663624 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.708139 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.721338 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.721515 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmmh\" (UniqueName: \"kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.721654 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.824701 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmmh\" (UniqueName: \"kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.824815 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.824921 4957 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.825662 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.825824 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.853582 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmmh\" (UniqueName: \"kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh\") pod \"redhat-operators-b4mvb\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:04 crc kubenswrapper[4957]: I1128 22:37:04.994912 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.321720 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" event={"ID":"547a5811-7041-40a4-9b71-5704e101c02c","Type":"ContainerDied","Data":"558773d661b96ac7f1cd62e7675befb6e0c86f5ede1e57a352c74bb3ff50fabc"} Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.321764 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558773d661b96ac7f1cd62e7675befb6e0c86f5ede1e57a352c74bb3ff50fabc" Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.321839 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-dd2hv" Nov 28 22:37:05 crc kubenswrapper[4957]: W1128 22:37:05.688840 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99c9ae0c_c01f_4581_89fe_9b1006a25e28.slice/crio-2fd501d38a7f827aecc19b1e80e1aa1e847028c313cd7ef13631c9c6bafae330 WatchSource:0}: Error finding container 2fd501d38a7f827aecc19b1e80e1aa1e847028c313cd7ef13631c9c6bafae330: Status 404 returned error can't find the container with id 2fd501d38a7f827aecc19b1e80e1aa1e847028c313cd7ef13631c9c6bafae330 Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.700037 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.723404 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-dd2hv"] Nov 28 22:37:05 crc kubenswrapper[4957]: I1128 22:37:05.739506 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-dd2hv"] Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.332844 4957 generic.go:334] "Generic (PLEG): container finished" podID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerID="76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc" exitCode=0 Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.332952 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerDied","Data":"76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc"} Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.333178 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerStarted","Data":"2fd501d38a7f827aecc19b1e80e1aa1e847028c313cd7ef13631c9c6bafae330"} Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.824701 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547a5811-7041-40a4-9b71-5704e101c02c" path="/var/lib/kubelet/pods/547a5811-7041-40a4-9b71-5704e101c02c/volumes" Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.922461 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-zwj74"] Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.924128 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.995300 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khngm\" (UniqueName: \"kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:06 crc kubenswrapper[4957]: I1128 22:37:06.995454 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.097474 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khngm\" (UniqueName: \"kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.097577 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.097758 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.124588 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khngm\" (UniqueName: \"kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm\") pod \"crc-debug-zwj74\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.242942 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:07 crc kubenswrapper[4957]: W1128 22:37:07.279368 4957 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd73f35f2_4d06_40ff_8746_a4071b243700.slice/crio-654a2df1c2379a24f389272ba034e8b5caa9d9823c232f65c9a60f462bcb3ba1 WatchSource:0}: Error finding container 654a2df1c2379a24f389272ba034e8b5caa9d9823c232f65c9a60f462bcb3ba1: Status 404 returned error can't find the container with id 654a2df1c2379a24f389272ba034e8b5caa9d9823c232f65c9a60f462bcb3ba1 Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.347180 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerStarted","Data":"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827"} Nov 28 22:37:07 crc kubenswrapper[4957]: I1128 22:37:07.349880 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" event={"ID":"d73f35f2-4d06-40ff-8746-a4071b243700","Type":"ContainerStarted","Data":"654a2df1c2379a24f389272ba034e8b5caa9d9823c232f65c9a60f462bcb3ba1"} Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.363749 4957 generic.go:334] "Generic (PLEG): container finished" podID="d73f35f2-4d06-40ff-8746-a4071b243700" containerID="eff8e6414b6ccfa2208e5f490da8a80f3648d2faed70b4f0c3ab69773f9ce574" exitCode=0 Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.363812 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" event={"ID":"d73f35f2-4d06-40ff-8746-a4071b243700","Type":"ContainerDied","Data":"eff8e6414b6ccfa2208e5f490da8a80f3648d2faed70b4f0c3ab69773f9ce574"} Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.449197 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-zwj74"] Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.461492 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pvbdh/crc-debug-zwj74"] Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.992635 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:37:08 crc kubenswrapper[4957]: I1128 22:37:08.992694 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.712137 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.858874 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khngm\" (UniqueName: \"kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm\") pod \"d73f35f2-4d06-40ff-8746-a4071b243700\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.859267 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host\") pod \"d73f35f2-4d06-40ff-8746-a4071b243700\" (UID: \"d73f35f2-4d06-40ff-8746-a4071b243700\") " Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.859800 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host" (OuterVolumeSpecName: "host") pod "d73f35f2-4d06-40ff-8746-a4071b243700" (UID: "d73f35f2-4d06-40ff-8746-a4071b243700"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.860038 4957 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d73f35f2-4d06-40ff-8746-a4071b243700-host\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.864469 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm" (OuterVolumeSpecName: "kube-api-access-khngm") pod "d73f35f2-4d06-40ff-8746-a4071b243700" (UID: "d73f35f2-4d06-40ff-8746-a4071b243700"). InnerVolumeSpecName "kube-api-access-khngm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:37:09 crc kubenswrapper[4957]: I1128 22:37:09.962472 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khngm\" (UniqueName: \"kubernetes.io/projected/d73f35f2-4d06-40ff-8746-a4071b243700-kube-api-access-khngm\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:10 crc kubenswrapper[4957]: I1128 22:37:10.394401 4957 generic.go:334] "Generic (PLEG): container finished" podID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerID="cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827" exitCode=0 Nov 28 22:37:10 crc kubenswrapper[4957]: I1128 22:37:10.394437 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerDied","Data":"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827"} Nov 28 22:37:10 crc kubenswrapper[4957]: I1128 22:37:10.398006 4957 scope.go:117] "RemoveContainer" containerID="eff8e6414b6ccfa2208e5f490da8a80f3648d2faed70b4f0c3ab69773f9ce574" Nov 28 22:37:10 crc kubenswrapper[4957]: I1128 22:37:10.398109 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/crc-debug-zwj74" Nov 28 22:37:10 crc kubenswrapper[4957]: I1128 22:37:10.826985 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73f35f2-4d06-40ff-8746-a4071b243700" path="/var/lib/kubelet/pods/d73f35f2-4d06-40ff-8746-a4071b243700/volumes" Nov 28 22:37:11 crc kubenswrapper[4957]: I1128 22:37:11.413821 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerStarted","Data":"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47"} Nov 28 22:37:11 crc kubenswrapper[4957]: I1128 22:37:11.446481 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4mvb" podStartSLOduration=2.670017412 podStartE2EDuration="7.446459316s" podCreationTimestamp="2025-11-28 22:37:04 +0000 UTC" firstStartedPulling="2025-11-28 22:37:06.334977218 +0000 UTC m=+6465.803625127" lastFinishedPulling="2025-11-28 22:37:11.111419122 +0000 UTC m=+6470.580067031" observedRunningTime="2025-11-28 22:37:11.434901821 +0000 UTC m=+6470.903549730" watchObservedRunningTime="2025-11-28 22:37:11.446459316 +0000 UTC m=+6470.915107225" Nov 28 22:37:14 crc kubenswrapper[4957]: I1128 22:37:14.996056 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:14 crc kubenswrapper[4957]: I1128 22:37:14.996724 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:16 crc kubenswrapper[4957]: I1128 22:37:16.055619 4957 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b4mvb" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="registry-server" probeResult="failure" output=< Nov 28 22:37:16 crc kubenswrapper[4957]: timeout: failed to connect service ":50051" within 1s Nov 28 22:37:16 crc kubenswrapper[4957]: > Nov 28 22:37:25 crc kubenswrapper[4957]: I1128 22:37:25.048555 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:25 crc kubenswrapper[4957]: I1128 22:37:25.102039 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:25 crc kubenswrapper[4957]: I1128 22:37:25.300288 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:26 crc kubenswrapper[4957]: I1128 22:37:26.593975 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4mvb" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="registry-server" containerID="cri-o://e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47" gracePeriod=2 Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.242646 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.388539 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities\") pod \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.388605 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") pod \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.388669 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slmmh\" (UniqueName: \"kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh\") pod \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.391003 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities" (OuterVolumeSpecName: "utilities") pod "99c9ae0c-c01f-4581-89fe-9b1006a25e28" (UID: "99c9ae0c-c01f-4581-89fe-9b1006a25e28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.399503 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh" (OuterVolumeSpecName: "kube-api-access-slmmh") pod "99c9ae0c-c01f-4581-89fe-9b1006a25e28" (UID: "99c9ae0c-c01f-4581-89fe-9b1006a25e28"). InnerVolumeSpecName "kube-api-access-slmmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.490491 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99c9ae0c-c01f-4581-89fe-9b1006a25e28" (UID: "99c9ae0c-c01f-4581-89fe-9b1006a25e28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.490926 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") pod \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\" (UID: \"99c9ae0c-c01f-4581-89fe-9b1006a25e28\") " Nov 28 22:37:27 crc kubenswrapper[4957]: W1128 22:37:27.491448 4957 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/99c9ae0c-c01f-4581-89fe-9b1006a25e28/volumes/kubernetes.io~empty-dir/catalog-content Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.491471 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99c9ae0c-c01f-4581-89fe-9b1006a25e28" (UID: "99c9ae0c-c01f-4581-89fe-9b1006a25e28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.492162 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.492184 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c9ae0c-c01f-4581-89fe-9b1006a25e28-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.492196 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slmmh\" (UniqueName: \"kubernetes.io/projected/99c9ae0c-c01f-4581-89fe-9b1006a25e28-kube-api-access-slmmh\") on node \"crc\" DevicePath \"\"" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.609377 4957 generic.go:334] "Generic (PLEG): container finished" podID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerID="e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47" exitCode=0 Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.609427 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4mvb" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.609457 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerDied","Data":"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47"} Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.610292 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4mvb" event={"ID":"99c9ae0c-c01f-4581-89fe-9b1006a25e28","Type":"ContainerDied","Data":"2fd501d38a7f827aecc19b1e80e1aa1e847028c313cd7ef13631c9c6bafae330"} Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.610323 4957 scope.go:117] "RemoveContainer" containerID="e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.656294 4957 scope.go:117] "RemoveContainer" containerID="cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.671666 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.684876 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4mvb"] Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.685745 4957 scope.go:117] "RemoveContainer" containerID="76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.753957 4957 scope.go:117] "RemoveContainer" containerID="e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47" Nov 28 22:37:27 crc kubenswrapper[4957]: E1128 22:37:27.754467 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47\": container with ID starting with e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47 not found: ID does not exist" containerID="e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.754512 4957 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47"} err="failed to get container status \"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47\": rpc error: code = NotFound desc = could not find container \"e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47\": container with ID starting with e8af1d0374d530d058494875eb51460a34232b6262f4cda88fff8192d5e3df47 not found: ID does not exist" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.754556 4957 scope.go:117] "RemoveContainer" containerID="cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827" Nov 28 22:37:27 crc kubenswrapper[4957]: E1128 22:37:27.755063 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827\": container with ID starting with cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827 not found: ID does not exist" containerID="cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.755101 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827"} err="failed to get container status \"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827\": rpc error: code = NotFound desc = could not find container \"cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827\": container with ID starting with cb6dc289abe343da1459f181ecb01d400fff4c8ab25e3c8aec7a04db81502827 not found: ID does not exist" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.755130 4957 scope.go:117] "RemoveContainer" containerID="76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc" Nov 28 22:37:27 crc kubenswrapper[4957]: E1128 22:37:27.755595 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc\": container with ID starting with 76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc not found: ID does not exist" containerID="76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc" Nov 28 22:37:27 crc kubenswrapper[4957]: I1128 22:37:27.755640 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc"} err="failed to get container status \"76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc\": rpc error: code = NotFound desc = could not find container \"76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc\": container with ID starting with 76a8d06987fc9644562119c97ad48f552f648255336708d0ef92068f44c11cfc not found: ID does not exist" Nov 28 22:37:28 crc kubenswrapper[4957]: I1128 22:37:28.840924 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" path="/var/lib/kubelet/pods/99c9ae0c-c01f-4581-89fe-9b1006a25e28/volumes" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.051684 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-api/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.141374 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-evaluator/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.235007 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-listener/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.302374 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b4c649b0-9467-4e39-98b6-54e217030877/aodh-notifier/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.359075 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68cfc84946-tg8qn_68ad8aac-7884-41b1-8f60-68a111f04c11/barbican-api/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.434063 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68cfc84946-tg8qn_68ad8aac-7884-41b1-8f60-68a111f04c11/barbican-api-log/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.548935 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d9bff56f6-ghr8m_cba969ad-04d0-4a30-946d-995723ab4041/barbican-keystone-listener/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.686313 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d9bff56f6-ghr8m_cba969ad-04d0-4a30-946d-995723ab4041/barbican-keystone-listener-log/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.754540 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768b76c799-g58ls_d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc/barbican-worker/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.791781 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-768b76c799-g58ls_d15ba5a2-dddb-4f0d-ad90-e93c5a0710cc/barbican-worker-log/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.947609 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vtn9l_9d722b72-77bc-4500-89f5-a13bfa49eba1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.991999 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:37:38 crc kubenswrapper[4957]: I1128 22:37:38.992065 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.117816 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/ceilometer-central-agent/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.165654 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/ceilometer-notification-agent/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.205406 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/proxy-httpd/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.267468 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_01b3ddaf-137b-49d1-9d77-0fa9eee151bd/sg-core/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.434908 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7b9e0f02-f5be-405c-8318-834ef79136be/cinder-api-log/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.446043 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7b9e0f02-f5be-405c-8318-834ef79136be/cinder-api/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.605276 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b490ddc2-ebb5-4cba-abea-a76c7e7a5172/cinder-scheduler/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.680089 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b490ddc2-ebb5-4cba-abea-a76c7e7a5172/probe/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.761320 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j7vg9_34a11afa-d26a-4036-8d4e-6dcd96bc3036/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.892622 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-gbxml_853f5e84-3f80-4dd1-99cb-4fb5006f2bf5/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:39 crc kubenswrapper[4957]: I1128 22:37:39.974614 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/init/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.122641 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/init/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.199176 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-g7j2p_228bc1b2-f53c-47ca-9063-2630d3331c8b/dnsmasq-dns/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.237364 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6qqj6_e706336e-29eb-4b07-b29c-bb080c8026be/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.455347 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29dab28a-afdf-4c02-a83a-f43c408b24ee/glance-log/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.487198 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29dab28a-afdf-4c02-a83a-f43c408b24ee/glance-httpd/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.623899 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_34d417c8-e8c3-491b-84e9-0db9f9a10038/glance-log/0.log" Nov 28 22:37:40 crc kubenswrapper[4957]: I1128 22:37:40.664482 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_34d417c8-e8c3-491b-84e9-0db9f9a10038/glance-httpd/0.log" Nov 
28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.232734 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-8f7b695b5-9dcxn_eecb8bf2-f385-4670-a84c-611a1f373c8f/heat-engine/0.log" Nov 28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.440386 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qwxkd_644a4348-cc60-4801-a899-27ba6238dcd1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.613106 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-58bdf58698-25xts_25301b86-e61f-4a9e-90e2-b1f1e9c045dc/heat-api/0.log" Nov 28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.704289 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4xnhm_52fdcae0-5fdd-48a8-9fe0-6e95ab6e205f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.737804 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-768b697649-7gz8m_8fc58381-64db-46f6-9e97-93e8e4c45abe/heat-cfnapi/0.log" Nov 28 22:37:41 crc kubenswrapper[4957]: I1128 22:37:41.942249 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29406121-b8vw6_010a05b8-93e0-4601-9d1d-f865737b9230/keystone-cron/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.076125 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7d861b88-8080-411b-8c34-ae277a73b580/kube-state-metrics/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.210540 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-lpx5l_a745c0d3-586f-4841-a3e4-08c009c85f9b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.317996 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7859c96b89-s4dx8_1ba53d5f-f252-4bd9-b9ac-83b26ffaa9b6/keystone-api/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.344114 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-js78h_80b23341-10eb-4c68-aba7-e36583140466/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.503242 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_448e0773-f22b-417a-a4b4-3434881c628f/mysqld-exporter/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.851475 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7n64j_c14be992-6888-4d40-a63f-8ba6cbc0c837/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:42 crc kubenswrapper[4957]: I1128 22:37:42.902877 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d88444bf-br9dz_f83754ad-9910-4042-9995-ca4dec9d9a29/neutron-httpd/0.log" Nov 28 22:37:43 crc kubenswrapper[4957]: I1128 22:37:43.088434 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-59d88444bf-br9dz_f83754ad-9910-4042-9995-ca4dec9d9a29/neutron-api/0.log" Nov 28 22:37:43 crc kubenswrapper[4957]: I1128 22:37:43.665761 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_22f8d9b1-ab89-42ad-8872-320873c45110/nova-cell0-conductor-conductor/0.log" Nov 28 22:37:43 crc kubenswrapper[4957]: I1128 22:37:43.751495 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1fd99b25-e3b2-439d-874c-6ae3351f9cea/nova-api-log/0.log" Nov 28 22:37:43 crc kubenswrapper[4957]: I1128 22:37:43.988638 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_33612176-0997-4d00-a797-b5997f0d00c3/nova-cell1-conductor-conductor/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.198320 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6bed6daf-51f2-46cc-9512-a24925686b61/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.254345 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rdwmr_d17490c8-1d2b-43d6-aefe-bcbc181d72aa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.470793 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1fd99b25-e3b2-439d-874c-6ae3351f9cea/nova-api-api/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.532285 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_27892901-c588-481e-8b3c-363e2128f7d3/nova-metadata-log/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.845689 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/mysql-bootstrap/0.log" Nov 28 22:37:44 crc kubenswrapper[4957]: I1128 22:37:44.947615 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0f526d90-5313-4d45-a9d1-760dbf18440d/nova-scheduler-scheduler/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.037648 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/galera/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.066964 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9b91aacb-b300-41de-814e-26e73ac93c2e/mysql-bootstrap/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.290939 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/mysql-bootstrap/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.434900 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/mysql-bootstrap/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.466323 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d97270c0-f75e-4695-87b5-2c7cfd08bf02/galera/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.679919 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9ccbd9c5-6f33-4f79-810e-5e9f6d2bc687/openstackclient/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.774620 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dzt8d_6bc9960f-fdff-42fa-8cdd-4ec0d88f359d/ovn-controller/0.log" Nov 28 22:37:45 crc kubenswrapper[4957]: I1128 22:37:45.938172 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-2d7lb_a91a39cf-2bad-48a1-9dc7-2309bc652725/openstack-network-exporter/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.096999 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server-init/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.283004 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.286304 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovsdb-server-init/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.289249 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cd25j_8edb774a-3d8c-4b9f-b9ca-febeb68d14bf/ovs-vswitchd/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.764386 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5k6gg_9938b0a7-21ab-4bb0-b689-6004bce90534/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.962678 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_69f9b12c-31d7-4df2-a4ec-5861c3ad3d76/ovn-northd/0.log" Nov 28 22:37:46 crc kubenswrapper[4957]: I1128 22:37:46.979102 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_69f9b12c-31d7-4df2-a4ec-5861c3ad3d76/openstack-network-exporter/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.072189 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_27892901-c588-481e-8b3c-363e2128f7d3/nova-metadata-metadata/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.166413 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4f513a2d-d752-44ee-b02c-e7f3dcb3945d/ovsdbserver-nb/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.171538 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4f513a2d-d752-44ee-b02c-e7f3dcb3945d/openstack-network-exporter/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.292936 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d137d00-b823-4d67-a158-71e84c6d2c6b/openstack-network-exporter/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.391044 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3d137d00-b823-4d67-a158-71e84c6d2c6b/ovsdbserver-sb/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.703398 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-956cd8448-hm6cs_271d8bc3-c837-4768-b8de-6b185bfa2659/placement-api/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.750830 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-956cd8448-hm6cs_271d8bc3-c837-4768-b8de-6b185bfa2659/placement-log/0.log" Nov 28 22:37:47 crc kubenswrapper[4957]: I1128 22:37:47.782576 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/init-config-reloader/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.020737 4957 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/config-reloader/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.024253 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/init-config-reloader/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.059930 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/prometheus/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.060111 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_fc4dd4fb-4706-4212-bfc5-84029b567248/thanos-sidecar/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.224255 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/setup-container/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.508123 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/rabbitmq/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.526771 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7b6a2345-f928-41e0-bb0d-efd6ca576e42/setup-container/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.526821 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/setup-container/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.715361 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/setup-container/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.783279 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_39bd199d-d600-4b4a-9d31-831e346ea98d/rabbitmq/0.log" Nov 28 22:37:48 crc kubenswrapper[4957]: I1128 22:37:48.801569 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v2hz9_be3139dd-9ebc-4678-abba-2217f17f76c1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.008896 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d6v7s_38c81f98-baf0-45aa-a33e-566697f7673c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.070019 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-p9hrp_1499e3ce-e9eb-4774-9f22-fbac5300742b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.269149 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-s4mhd_e92f8c7f-4fd3-4ece-963f-3e904a5057bf/ssh-known-hosts-edpm-deployment/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.273031 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lq4r6_517e3d64-b818-4eea-a010-1237b735c5e2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.542656 4957 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6598dd477f-t4jws_d3417d80-e650-4833-b935-a4cbacf23212/proxy-server/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.750713 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6598dd477f-t4jws_d3417d80-e650-4833-b935-a4cbacf23212/proxy-httpd/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.755270 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bzbx8_93cfcc7a-cedc-4adc-abb3-eef0aec22ae7/swift-ring-rebalance/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.921110 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-auditor/0.log" Nov 28 22:37:49 crc kubenswrapper[4957]: I1128 22:37:49.979251 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-reaper/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.064920 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-replicator/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.100561 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/account-server/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.153258 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-auditor/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.206540 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-replicator/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.275131 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-server/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.337107 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/container-updater/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.364502 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-auditor/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.430877 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-expirer/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.530918 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-replicator/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.546142 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-server/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.605114 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/object-updater/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.624835 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/rsync/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.742982 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb21c56-7bea-44f0-8dc6-b50a18a7cbd5/swift-recon-cron/0.log" Nov 28 22:37:50 crc kubenswrapper[4957]: I1128 22:37:50.930580 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-6jxb7_dbbcbcc2-4eaf-449f-bb83-c56566fd9a2d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:51 crc kubenswrapper[4957]: I1128 22:37:51.000907 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-dm7vd_52615b47-f32d-4e3a-a0a0-dc23c7bc7677/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:51 crc kubenswrapper[4957]: I1128 22:37:51.229764 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c48c0f07-ec0e-4f0e-9c97-b47fe68e5cdb/test-operator-logs-container/0.log" Nov 28 22:37:51 crc kubenswrapper[4957]: I1128 22:37:51.419820 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-j6lfh_c9b89b14-7e55-48b4-bbd9-5c67ed879847/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 22:37:52 crc kubenswrapper[4957]: I1128 22:37:52.037943 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d0047755-5ddc-48c8-a4eb-4bf540cb695f/tempest-tests-tempest-tests-runner/0.log" Nov 28 22:37:57 crc kubenswrapper[4957]: I1128 22:37:57.164613 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_179be6ed-b240-4fde-995c-92c72dbd2b02/memcached/0.log" Nov 28 22:38:08 crc kubenswrapper[4957]: I1128 22:38:08.992494 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:38:08 crc kubenswrapper[4957]: I1128 22:38:08.993058 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:38:08 crc kubenswrapper[4957]: I1128 22:38:08.993111 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:38:08 crc kubenswrapper[4957]: I1128 22:38:08.993982 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:38:08 crc kubenswrapper[4957]: I1128 22:38:08.994034 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" 
containerName="machine-config-daemon" containerID="cri-o://6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51" gracePeriod=600 Nov 28 22:38:10 crc kubenswrapper[4957]: I1128 22:38:10.107142 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51" exitCode=0 Nov 28 22:38:10 crc kubenswrapper[4957]: I1128 22:38:10.107276 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51"} Nov 28 22:38:10 crc kubenswrapper[4957]: I1128 22:38:10.107724 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerStarted","Data":"d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786"} Nov 28 22:38:10 crc kubenswrapper[4957]: I1128 22:38:10.107752 4957 scope.go:117] "RemoveContainer" containerID="fad6e527856d5f02ee52ed99313270d41acd8037366fc26c89c9b48ab4387228" Nov 28 22:38:17 crc kubenswrapper[4957]: I1128 22:38:17.609142 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:38:17 crc kubenswrapper[4957]: I1128 22:38:17.743590 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:38:17 crc kubenswrapper[4957]: I1128 22:38:17.797599 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:38:17 crc kubenswrapper[4957]: I1128 22:38:17.798377 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:38:17 crc kubenswrapper[4957]: I1128 22:38:17.989573 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/util/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.001895 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/pull/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.019016 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_468c1677a59431dfffd73cd201b0804c9e50972b4c84a01d502eb73dfesqxzq_ac62e91b-26c6-4dac-bba1-54f4e46ff61e/extract/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.172701 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-sfgm2_12484928-2fe4-4bd6-bac2-e0f2e48829fe/kube-rbac-proxy/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.260813 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-sfgm2_12484928-2fe4-4bd6-bac2-e0f2e48829fe/manager/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.287951 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s85m7_1a5138b3-6b84-43b0-bdc9-f867a83f4bc7/kube-rbac-proxy/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.428936 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-s85m7_1a5138b3-6b84-43b0-bdc9-f867a83f4bc7/manager/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.477702 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kbhl9_442226e4-b2b8-41c8-9278-2845b2fff0aa/kube-rbac-proxy/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.480483 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kbhl9_442226e4-b2b8-41c8-9278-2845b2fff0aa/manager/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.675300 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-8t4fj_c330a33e-ec13-4ec0-869b-4847b9385d5d/kube-rbac-proxy/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.756158 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-8t4fj_c330a33e-ec13-4ec0-869b-4847b9385d5d/manager/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.847862 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v6427_d50c67da-27ca-4ab9-bf83-b2275ff3d801/kube-rbac-proxy/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.978609 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v6427_d50c67da-27ca-4ab9-bf83-b2275ff3d801/manager/0.log" Nov 28 22:38:18 crc kubenswrapper[4957]: I1128 22:38:18.992467 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xqjj5_8eac7f46-0beb-4f3f-a530-2fed527b6383/kube-rbac-proxy/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.082746 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xqjj5_8eac7f46-0beb-4f3f-a530-2fed527b6383/manager/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.194182 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ccmt8_96a751a3-4af7-4cb8-b12b-46e0d177b6f3/kube-rbac-proxy/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.428151 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-ccmt8_96a751a3-4af7-4cb8-b12b-46e0d177b6f3/manager/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.457133 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-8wqx7_c2cca951-4ada-44ec-ab43-a1f69ee7f7cb/kube-rbac-proxy/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.478156 4957 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-8wqx7_c2cca951-4ada-44ec-ab43-a1f69ee7f7cb/manager/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.611535 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-47tjl_a8962e83-cc90-4844-9bca-96e85cf789bd/kube-rbac-proxy/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.748159 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-47tjl_a8962e83-cc90-4844-9bca-96e85cf789bd/manager/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.825045 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-2n5cx_34faaa98-3568-4478-b968-b9cbe87c77f3/kube-rbac-proxy/0.log" Nov 28 22:38:19 crc kubenswrapper[4957]: I1128 22:38:19.840924 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-2n5cx_34faaa98-3568-4478-b968-b9cbe87c77f3/manager/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.142823 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-d6twj_f510519a-6187-47f8-875e-3e9a5537c364/kube-rbac-proxy/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.241272 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-d6twj_f510519a-6187-47f8-875e-3e9a5537c364/manager/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.283186 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ln4j9_47f33b35-a8d3-4981-8001-47b906a33fa6/kube-rbac-proxy/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.381136 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ln4j9_47f33b35-a8d3-4981-8001-47b906a33fa6/manager/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.435549 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-bn4dd_c59777ed-7790-45bc-972a-f9fbe8fbccf4/kube-rbac-proxy/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.555787 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-bn4dd_c59777ed-7790-45bc-972a-f9fbe8fbccf4/manager/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.627230 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-npt5l_844d1842-4247-4b95-8cca-1785d3ed80b8/manager/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.659971 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-npt5l_844d1842-4247-4b95-8cca-1785d3ed80b8/kube-rbac-proxy/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 22:38:20.799701 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v_15b01ca6-83c4-47da-bd82-8b5c4a177561/kube-rbac-proxy/0.log" Nov 28 22:38:20 crc kubenswrapper[4957]: I1128 
22:38:20.856553 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fjp8v_15b01ca6-83c4-47da-bd82-8b5c4a177561/manager/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.229399 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-567f7c7dd7-9wckn_917e81d1-a7a3-431f-9b6f-511334a57f50/operator/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.263964 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gnq58_f76be402-2871-4e82-8c2a-8cc359b8c889/registry-server/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.453882 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cnzv4_499b2d8c-a27a-46f1-9f38-8b29ab905da7/kube-rbac-proxy/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.649767 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cnzv4_499b2d8c-a27a-46f1-9f38-8b29ab905da7/manager/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.763696 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-2w9h7_02e155d2-76c6-4fca-b013-6c2dcf607cdb/kube-rbac-proxy/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.776328 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-2w9h7_02e155d2-76c6-4fca-b013-6c2dcf607cdb/manager/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.924724 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cfdjt_7a4dc310-e5f8-4a6f-8c8b-94a7faca596d/operator/0.log" Nov 28 22:38:21 crc kubenswrapper[4957]: I1128 22:38:21.990359 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nq5h8_b8066278-4583-4fe3-aed6-93543482ab1e/kube-rbac-proxy/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.153548 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nq5h8_b8066278-4583-4fe3-aed6-93543482ab1e/manager/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.251587 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f6754bd54-dbj68_a3a9a0f3-6f26-4174-973d-049a1b8a2573/kube-rbac-proxy/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.339007 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fb8944fcb-x9n55_aaaab82e-6456-4b20-9d92-f19458df9948/manager/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.429416 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-v56f9_554f334d-cef4-48f9-bb57-03261844fbde/kube-rbac-proxy/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.470555 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-v56f9_554f334d-cef4-48f9-bb57-03261844fbde/manager/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: 
I1128 22:38:22.530367 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f6754bd54-dbj68_a3a9a0f3-6f26-4174-973d-049a1b8a2573/manager/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.613395 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qshzq_3d6f1d41-eaa5-4258-906c-5894ac698e5b/kube-rbac-proxy/0.log" Nov 28 22:38:22 crc kubenswrapper[4957]: I1128 22:38:22.638587 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-qshzq_3d6f1d41-eaa5-4258-906c-5894ac698e5b/manager/0.log" Nov 28 22:38:40 crc kubenswrapper[4957]: I1128 22:38:40.851608 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pj8vl_27a7baa1-a66c-4c13-be52-2a401578c92d/control-plane-machine-set-operator/0.log" Nov 28 22:38:40 crc kubenswrapper[4957]: I1128 22:38:40.999187 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f229v_b36a4b12-b069-4dc4-a503-936aae20d06e/kube-rbac-proxy/0.log" Nov 28 22:38:41 crc kubenswrapper[4957]: I1128 22:38:41.014616 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f229v_b36a4b12-b069-4dc4-a503-936aae20d06e/machine-api-operator/0.log" Nov 28 22:38:52 crc kubenswrapper[4957]: I1128 22:38:52.084024 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-88wlc_e13a6b33-f471-46db-b7f2-98600799eaef/cert-manager-controller/0.log" Nov 28 22:38:52 crc kubenswrapper[4957]: I1128 22:38:52.280468 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8kfk7_228af258-a007-4de0-922b-f434bc1e665b/cert-manager-cainjector/0.log" Nov 28 22:38:52 crc kubenswrapper[4957]: I1128 22:38:52.322465 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-gjw6l_bd388a87-03fe-4f7f-b36a-a89a8d110806/cert-manager-webhook/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.113295 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-8fhk9_c781e919-f546-4bd7-b564-cd424540268c/nmstate-console-plugin/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.228165 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4fhg7_f9d6e935-f7a3-4a37-8d21-5bb73ef04186/nmstate-handler/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.337924 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-8gmh8_b878fdac-b49b-40d8-b0cb-af5d44f21f9d/kube-rbac-proxy/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.342362 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-8gmh8_b878fdac-b49b-40d8-b0cb-af5d44f21f9d/nmstate-metrics/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.513064 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-fzfdw_66d17f19-66f9-41c9-9566-fca688da8506/nmstate-operator/0.log" Nov 28 22:39:04 crc kubenswrapper[4957]: I1128 22:39:04.575610 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-t9nrr_9234d52f-6818-4ccf-ac79-4d5e4f3cce21/nmstate-webhook/0.log" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.228456 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:05 crc kubenswrapper[4957]: E1128 22:39:05.228927 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="extract-utilities" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.228941 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="extract-utilities" Nov 28 22:39:05 crc kubenswrapper[4957]: E1128 22:39:05.228955 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="extract-content" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.228961 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="extract-content" Nov 28 22:39:05 crc kubenswrapper[4957]: E1128 22:39:05.228975 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="registry-server" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.228981 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="registry-server" Nov 28 22:39:05 crc kubenswrapper[4957]: E1128 22:39:05.228991 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73f35f2-4d06-40ff-8746-a4071b243700" containerName="container-00" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.228997 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73f35f2-4d06-40ff-8746-a4071b243700" containerName="container-00" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.229280 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73f35f2-4d06-40ff-8746-a4071b243700" containerName="container-00" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.229308 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c9ae0c-c01f-4581-89fe-9b1006a25e28" containerName="registry-server" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.233285 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.244487 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.393871 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.394582 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbw84\" (UniqueName: \"kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.394687 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.497144 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbw84\" (UniqueName: \"kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.497218 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.497263 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.497835 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.497903 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.516835 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zbw84\" (UniqueName: \"kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84\") pod \"community-operators-c5pnl\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:05 crc kubenswrapper[4957]: I1128 22:39:05.553032 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:06 crc kubenswrapper[4957]: I1128 22:39:06.044063 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:06 crc kubenswrapper[4957]: I1128 22:39:06.765225 4957 generic.go:334] "Generic (PLEG): container finished" podID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerID="71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56" exitCode=0 Nov 28 22:39:06 crc kubenswrapper[4957]: I1128 22:39:06.765284 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerDied","Data":"71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56"} Nov 28 22:39:06 crc kubenswrapper[4957]: I1128 22:39:06.765733 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerStarted","Data":"a77b2025f57fe301223c2f5586b4e0682a24cf8fc3e54bc5b8c6aeffe3f3a169"} Nov 28 22:39:06 crc kubenswrapper[4957]: I1128 22:39:06.769060 4957 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 22:39:07 crc kubenswrapper[4957]: I1128 22:39:07.779433 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerStarted","Data":"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8"} Nov 28 22:39:08 crc kubenswrapper[4957]: I1128 22:39:08.800815 4957 generic.go:334] "Generic (PLEG): container finished" podID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerID="058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8" exitCode=0 Nov 28 22:39:08 crc kubenswrapper[4957]: I1128 22:39:08.800921 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerDied","Data":"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8"} Nov 28 22:39:09 crc kubenswrapper[4957]: I1128 22:39:09.814845 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerStarted","Data":"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1"} Nov 28 22:39:09 crc kubenswrapper[4957]: I1128 22:39:09.843968 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5pnl" podStartSLOduration=2.276761581 podStartE2EDuration="4.843938767s" podCreationTimestamp="2025-11-28 22:39:05 +0000 UTC" firstStartedPulling="2025-11-28 22:39:06.768807202 +0000 UTC m=+6586.237455111" lastFinishedPulling="2025-11-28 22:39:09.335984398 +0000 UTC m=+6588.804632297" observedRunningTime="2025-11-28 22:39:09.833239753 +0000 UTC m=+6589.301887652" watchObservedRunningTime="2025-11-28 
22:39:09.843938767 +0000 UTC m=+6589.312586716" Nov 28 22:39:15 crc kubenswrapper[4957]: I1128 22:39:15.553422 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:15 crc kubenswrapper[4957]: I1128 22:39:15.554484 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:15 crc kubenswrapper[4957]: I1128 22:39:15.650457 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:15 crc kubenswrapper[4957]: I1128 22:39:15.933908 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:15 crc kubenswrapper[4957]: I1128 22:39:15.994428 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:17 crc kubenswrapper[4957]: I1128 22:39:17.042201 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/kube-rbac-proxy/0.log" Nov 28 22:39:17 crc kubenswrapper[4957]: I1128 22:39:17.103599 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/manager/0.log" Nov 28 22:39:17 crc kubenswrapper[4957]: I1128 22:39:17.910159 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5pnl" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="registry-server" containerID="cri-o://88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1" gracePeriod=2 Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.474343 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.636263 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbw84\" (UniqueName: \"kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84\") pod \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.636930 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content\") pod \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.637145 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities\") pod \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\" (UID: \"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2\") " Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.637852 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities" (OuterVolumeSpecName: "utilities") pod "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" (UID: "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.638507 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.646138 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84" (OuterVolumeSpecName: "kube-api-access-zbw84") pod "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" (UID: "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2"). InnerVolumeSpecName "kube-api-access-zbw84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.683182 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" (UID: "19eaefe9-aafa-463d-95b1-d3ff70ee5ea2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.741025 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbw84\" (UniqueName: \"kubernetes.io/projected/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-kube-api-access-zbw84\") on node \"crc\" DevicePath \"\"" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.741088 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.924979 4957 generic.go:334] "Generic (PLEG): container finished" podID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerID="88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1" exitCode=0 Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.925024 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerDied","Data":"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1"} Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.925052 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pnl" event={"ID":"19eaefe9-aafa-463d-95b1-d3ff70ee5ea2","Type":"ContainerDied","Data":"a77b2025f57fe301223c2f5586b4e0682a24cf8fc3e54bc5b8c6aeffe3f3a169"} Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.925050 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pnl" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.925068 4957 scope.go:117] "RemoveContainer" containerID="88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.949248 4957 scope.go:117] "RemoveContainer" containerID="058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8" Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.956053 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.967945 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5pnl"] Nov 28 22:39:18 crc kubenswrapper[4957]: I1128 22:39:18.974980 4957 scope.go:117] "RemoveContainer" containerID="71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.025327 4957 scope.go:117] "RemoveContainer" containerID="88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1" Nov 28 22:39:19 crc kubenswrapper[4957]: E1128 22:39:19.025977 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1\": container with ID starting with 88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1 not found: ID does not exist" containerID="88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.026041 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1"} err="failed to get container status \"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1\": rpc error: code = NotFound desc = could not find container \"88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1\": container with ID starting with 88ce3ad563c65813659c823b6005ea950a1a6cb4d583860153b5b1b25af174d1 not found: ID does not exist" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.026086 4957 scope.go:117] "RemoveContainer" containerID="058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8" Nov 28 22:39:19 crc kubenswrapper[4957]: E1128 22:39:19.026789 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8\": container with ID starting with 058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8 not found: ID does not exist" containerID="058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.026869 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8"} err="failed to get container status \"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8\": rpc error: code = NotFound desc = could not find container \"058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8\": container with ID starting with 058e30504a8054587391523ad3bd0877c621943768fa43e52f28c25da711d7a8 not found: ID does not exist" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.026930 4957 scope.go:117] "RemoveContainer" 
containerID="71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56" Nov 28 22:39:19 crc kubenswrapper[4957]: E1128 22:39:19.028052 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56\": container with ID starting with 71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56 not found: ID does not exist" containerID="71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56" Nov 28 22:39:19 crc kubenswrapper[4957]: I1128 22:39:19.028086 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56"} err="failed to get container status \"71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56\": rpc error: code = NotFound desc = could not find container \"71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56\": container with ID starting with 71938d920dbfa035735be93b36800a27395d2a0a8076a9260fad4af0e5c2fc56 not found: ID does not exist" Nov 28 22:39:20 crc kubenswrapper[4957]: I1128 22:39:20.837968 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" path="/var/lib/kubelet/pods/19eaefe9-aafa-463d-95b1-d3ff70ee5ea2/volumes" Nov 28 22:39:29 crc kubenswrapper[4957]: I1128 22:39:29.817994 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-bw9vg_65a613a8-4720-4ef2-be4d-dceeee3ce44e/cluster-logging-operator/0.log" Nov 28 22:39:29 crc kubenswrapper[4957]: I1128 22:39:29.980500 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-zvxsk_827dd4f4-1c01-43ba-b1d8-d5c774a45d46/collector/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.051468 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_b7f3cf26-8b94-4ed1-a709-f5a5a36b40b1/loki-compactor/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.168347 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-4r6v6_67552d33-b77e-41cc-8233-16009aa347ca/loki-distributor/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.247562 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-gtfb8_0ae348b1-f460-4971-bd3f-6832a96d1f70/gateway/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.292241 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-gtfb8_0ae348b1-f460-4971-bd3f-6832a96d1f70/opa/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.394788 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-p9fzr_e2dbe144-6748-4562-888e-ac850bb6c0b4/gateway/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.417912 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9667d547d-p9fzr_e2dbe144-6748-4562-888e-ac850bb6c0b4/opa/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.562363 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_cbbe211e-0fa5-42f6-830e-4feb479b2b58/loki-index-gateway/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 
22:39:30.697383 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_3866c99e-7b87-4a00-9df5-b121467d603e/loki-ingester/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.769011 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-2cmml_c90dbf3d-fe84-45cc-baca-5cbc545bbb53/loki-querier/0.log" Nov 28 22:39:30 crc kubenswrapper[4957]: I1128 22:39:30.886873 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-6rjgr_ba532acb-af97-43b9-b61b-e54721951c1a/loki-query-frontend/0.log" Nov 28 22:39:43 crc kubenswrapper[4957]: I1128 22:39:43.897617 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-mtzhn_505298d8-01d1-4918-8329-04c935f6a8a0/kube-rbac-proxy/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.065054 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-mtzhn_505298d8-01d1-4918-8329-04c935f6a8a0/controller/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.090371 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.287182 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.287553 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.306505 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.309629 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.504101 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.544705 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.548072 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.548735 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.724806 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-frr-files/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.733736 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-reloader/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.736581 4957 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/cp-metrics/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.761128 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/controller/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.891649 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/frr-metrics/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.912926 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/kube-rbac-proxy/0.log" Nov 28 22:39:44 crc kubenswrapper[4957]: I1128 22:39:44.978489 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/kube-rbac-proxy-frr/0.log" Nov 28 22:39:45 crc kubenswrapper[4957]: I1128 22:39:45.134966 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/reloader/0.log" Nov 28 22:39:45 crc kubenswrapper[4957]: I1128 22:39:45.233055 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-zqgbd_58bfcebd-2036-46ce-8b59-d47e2b138c2f/frr-k8s-webhook-server/0.log" Nov 28 22:39:45 crc kubenswrapper[4957]: I1128 22:39:45.398775 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7cdb7495d5-qqgdt_aa98f27d-5bda-41a4-bd59-1dff81ae7a65/manager/0.log" Nov 28 22:39:45 crc kubenswrapper[4957]: I1128 22:39:45.556790 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-66c879d448-sm6v6_b0fbca5a-3b56-4822-9a82-5ec342b6b89a/webhook-server/0.log" Nov 28 22:39:45 crc kubenswrapper[4957]: I1128 22:39:45.772165 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dn826_df95d986-54c6-4e37-87f7-6775e4c24d4f/kube-rbac-proxy/0.log" Nov 28 22:39:46 crc kubenswrapper[4957]: I1128 22:39:46.327713 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dn826_df95d986-54c6-4e37-87f7-6775e4c24d4f/speaker/0.log" Nov 28 22:39:46 crc kubenswrapper[4957]: I1128 22:39:46.893079 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9v6v_94df2aa8-79ae-408f-af72-a1f5ee3a05f2/frr/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.218809 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.417342 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.420758 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.426199 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.595081 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/extract/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.612948 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/pull/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.645849 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8dfpxp_758a064a-dbeb-49f3-b1d0-d7fdde81002b/util/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.767034 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:39:58 crc kubenswrapper[4957]: I1128 22:39:58.964675 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.002643 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.016016 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.165907 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/util/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.181732 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/extract/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.187246 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fv828c_ffa7bb6e-8f47-46e9-92e6-0669f49584f9/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.334779 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.510072 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.530284 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.547637 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.702633 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/util/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.715791 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/pull/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.730717 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bw9cv_9180c900-b668-4bb3-89b2-8b6018f6de18/extract/0.log" Nov 28 22:39:59 crc kubenswrapper[4957]: I1128 22:39:59.854586 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.024351 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.030941 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.079926 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.233820 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.242988 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.268413 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fhg74d_0591b6e5-8805-4fd5-b1da-1d132f3a0e94/extract/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.396734 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.544045 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.569529 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.583625 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.729117 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/util/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.752095 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/pull/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.756981 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834t4jp_7681936e-c73f-4b09-b146-53988a18a40b/extract/0.log" Nov 28 22:40:00 crc kubenswrapper[4957]: I1128 22:40:00.906182 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.112769 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.113267 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.141799 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.332998 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-utilities/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.390543 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/extract-content/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.541956 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.748355 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.784235 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.861399 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:40:01 crc kubenswrapper[4957]: I1128 22:40:01.964777 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b47dz_c14378db-11fd-4aa8-ad95-c9531993160a/registry-server/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.007796 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-content/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.226231 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/extract-utilities/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.371481 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rv2ws_01c31d76-bda9-44e6-b62a-04a154eeae84/marketplace-operator/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.493145 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.739252 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.758907 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:40:02 crc kubenswrapper[4957]: I1128 22:40:02.771546 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.020331 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-utilities/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.088440 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/extract-content/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.278064 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8kmxc_564f67f5-ceaa-4b51-bb95-289d69ab2bdf/registry-server/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.303592 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.400319 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lrgtn_7d71eea9-30f9-4091-acf2-c7e6e5890b30/registry-server/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.475279 4957 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.484605 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.509501 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.645384 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-content/0.log" Nov 28 22:40:03 crc kubenswrapper[4957]: I1128 22:40:03.707930 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/extract-utilities/0.log" Nov 28 22:40:04 crc kubenswrapper[4957]: I1128 22:40:04.327405 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hp8gk_f9d7934f-40b4-4156-b9c4-645229f18296/registry-server/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.278285 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-v86zm_cda43354-6472-4023-914d-dde633218f08/prometheus-operator/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.428110 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ccbc4bc97-rrt5d_6439437b-8d36-450e-87e0-9b394b0aa987/prometheus-operator-admission-webhook/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.525872 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ccbc4bc97-vddx8_a78ee796-8b40-4db0-9834-a4d66c77f95a/prometheus-operator-admission-webhook/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.674328 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-sbrrc_d0c7dd57-36ce-4b74-b2f3-4a9dbaa3fa0b/operator/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.796916 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-bhd2d_72d57747-268e-40db-85cc-98d5ed48a55f/observability-ui-dashboards/0.log" Nov 28 22:40:16 crc kubenswrapper[4957]: I1128 22:40:16.912261 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-jbfht_2841a3ed-5cfc-4a7b-a2bd-a3536018850f/perses-operator/0.log" Nov 28 22:40:29 crc kubenswrapper[4957]: I1128 22:40:29.385107 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/manager/0.log" Nov 28 22:40:29 crc kubenswrapper[4957]: I1128 22:40:29.400504 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69579bc464-22g8x_9ea32e2a-3b67-44d4-a881-32a968981c1c/kube-rbac-proxy/0.log" Nov 28 22:40:38 crc kubenswrapper[4957]: I1128 22:40:38.992959 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:40:38 crc kubenswrapper[4957]: I1128 22:40:38.993514 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:41:08 crc kubenswrapper[4957]: I1128 22:41:08.992973 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:41:08 crc kubenswrapper[4957]: I1128 22:41:08.993881 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:41:38 crc kubenswrapper[4957]: I1128 22:41:38.992724 4957 patch_prober.go:28] interesting pod/machine-config-daemon-hq5x2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 22:41:38 crc kubenswrapper[4957]: I1128 22:41:38.993288 4957 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 22:41:38 crc kubenswrapper[4957]: I1128 22:41:38.993331 4957 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" Nov 28 22:41:38 crc kubenswrapper[4957]: I1128 22:41:38.994199 4957 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786"} pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 22:41:38 crc kubenswrapper[4957]: I1128 22:41:38.994271 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerName="machine-config-daemon" containerID="cri-o://d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" gracePeriod=600 Nov 28 22:41:39 crc kubenswrapper[4957]: E1128 22:41:39.120197 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:41:39 crc kubenswrapper[4957]: I1128 22:41:39.450288 4957 generic.go:334] "Generic (PLEG): container finished" podID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" exitCode=0 Nov 28 22:41:39 crc kubenswrapper[4957]: I1128 22:41:39.450348 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" event={"ID":"8d41c2ca-d1ca-46b0-be19-6e4693f0b827","Type":"ContainerDied","Data":"d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786"} Nov 28 22:41:39 crc kubenswrapper[4957]: I1128 22:41:39.450728 4957 scope.go:117] "RemoveContainer" containerID="6ac70cd5de2ed775b4637d8cd0e97b2bfbea648df82113649dcc60c5cda7ca51" Nov 28 22:41:39 crc kubenswrapper[4957]: I1128 22:41:39.451596 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:41:39 crc kubenswrapper[4957]: E1128 22:41:39.452117 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.713485 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:41:47 crc kubenswrapper[4957]: E1128 22:41:47.714787 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="extract-utilities" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.714808 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="extract-utilities" Nov 28 22:41:47 crc kubenswrapper[4957]: E1128 22:41:47.714817 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="registry-server" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.714824 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="registry-server" Nov 28 22:41:47 crc kubenswrapper[4957]: E1128 22:41:47.714884 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="extract-content" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.714893 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="extract-content" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.715157 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="19eaefe9-aafa-463d-95b1-d3ff70ee5ea2" containerName="registry-server" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.717040 4957 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.725670 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.758572 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tgz\" (UniqueName: \"kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.758812 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.758921 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.861632 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tgz\" (UniqueName: \"kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.861839 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.861939 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.862450 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.863886 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:47 crc kubenswrapper[4957]: I1128 22:41:47.885446 4957 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w9tgz\" (UniqueName: \"kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz\") pod \"certified-operators-flxkc\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:48 crc kubenswrapper[4957]: I1128 22:41:48.042172 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:48 crc kubenswrapper[4957]: I1128 22:41:48.583686 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:41:49 crc kubenswrapper[4957]: I1128 22:41:49.555803 4957 generic.go:334] "Generic (PLEG): container finished" podID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerID="a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624" exitCode=0 Nov 28 22:41:49 crc kubenswrapper[4957]: I1128 22:41:49.555897 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerDied","Data":"a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624"} Nov 28 22:41:49 crc kubenswrapper[4957]: I1128 22:41:49.556262 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerStarted","Data":"bd7f01f23bed30fa02db2f8bc7a2d9508a9856b060ef470acef65ecdc6ab95de"} Nov 28 22:41:50 crc kubenswrapper[4957]: I1128 22:41:50.585585 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerStarted","Data":"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303"} Nov 28 22:41:51 crc kubenswrapper[4957]: I1128 22:41:51.598944 4957 generic.go:334] "Generic (PLEG): container finished" podID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerID="0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303" exitCode=0 Nov 28 22:41:51 crc kubenswrapper[4957]: I1128 22:41:51.599011 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerDied","Data":"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303"} Nov 28 22:41:52 crc kubenswrapper[4957]: I1128 22:41:52.610703 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerStarted","Data":"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4"} Nov 28 22:41:52 crc kubenswrapper[4957]: I1128 22:41:52.640933 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-flxkc" podStartSLOduration=3.106201501 podStartE2EDuration="5.640911167s" podCreationTimestamp="2025-11-28 22:41:47 +0000 UTC" firstStartedPulling="2025-11-28 22:41:49.558575214 +0000 UTC m=+6749.027223123" lastFinishedPulling="2025-11-28 22:41:52.09328488 +0000 UTC m=+6751.561932789" observedRunningTime="2025-11-28 22:41:52.62966273 +0000 UTC m=+6752.098310639" watchObservedRunningTime="2025-11-28 22:41:52.640911167 +0000 UTC m=+6752.109559076" Nov 28 22:41:52 crc kubenswrapper[4957]: I1128 22:41:52.817885 4957 scope.go:117] "RemoveContainer" 
containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:41:52 crc kubenswrapper[4957]: E1128 22:41:52.818753 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:41:58 crc kubenswrapper[4957]: I1128 22:41:58.042717 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:58 crc kubenswrapper[4957]: I1128 22:41:58.043567 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:58 crc kubenswrapper[4957]: I1128 22:41:58.092341 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:58 crc kubenswrapper[4957]: I1128 22:41:58.765579 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:41:59 crc kubenswrapper[4957]: I1128 22:41:59.853162 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:42:00 crc kubenswrapper[4957]: I1128 22:42:00.713328 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-flxkc" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="registry-server" containerID="cri-o://12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4" gracePeriod=2 Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.263139 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.294404 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content\") pod \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.294596 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9tgz\" (UniqueName: \"kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz\") pod \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.294629 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities\") pod \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\" (UID: \"77cf7304-3e09-4710-bb1a-b5c1cba21af6\") " Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.295436 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities" (OuterVolumeSpecName: "utilities") pod "77cf7304-3e09-4710-bb1a-b5c1cba21af6" (UID: "77cf7304-3e09-4710-bb1a-b5c1cba21af6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.300928 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz" (OuterVolumeSpecName: "kube-api-access-w9tgz") pod "77cf7304-3e09-4710-bb1a-b5c1cba21af6" (UID: "77cf7304-3e09-4710-bb1a-b5c1cba21af6"). InnerVolumeSpecName "kube-api-access-w9tgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.341157 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77cf7304-3e09-4710-bb1a-b5c1cba21af6" (UID: "77cf7304-3e09-4710-bb1a-b5c1cba21af6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.397747 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9tgz\" (UniqueName: \"kubernetes.io/projected/77cf7304-3e09-4710-bb1a-b5c1cba21af6-kube-api-access-w9tgz\") on node \"crc\" DevicePath \"\"" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.397792 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.397804 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77cf7304-3e09-4710-bb1a-b5c1cba21af6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.738098 4957 generic.go:334] "Generic (PLEG): container finished" podID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerID="12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4" exitCode=0 Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.738266 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerDied","Data":"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4"} Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.738466 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flxkc" event={"ID":"77cf7304-3e09-4710-bb1a-b5c1cba21af6","Type":"ContainerDied","Data":"bd7f01f23bed30fa02db2f8bc7a2d9508a9856b060ef470acef65ecdc6ab95de"} Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.738372 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-flxkc" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.738545 4957 scope.go:117] "RemoveContainer" containerID="12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.769256 4957 scope.go:117] "RemoveContainer" containerID="0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.781795 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.791760 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-flxkc"] Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.801593 4957 scope.go:117] "RemoveContainer" containerID="a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.842257 4957 scope.go:117] "RemoveContainer" containerID="12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4" Nov 28 22:42:01 crc kubenswrapper[4957]: E1128 22:42:01.844698 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4\": container with ID starting with 12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4 not found: ID does not exist" containerID="12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.844750 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4"} err="failed to get container status \"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4\": rpc error: code = NotFound desc = could not find container \"12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4\": container with ID starting with 12ea19a42be0440eb9e2cc347209a767f66d56ebaa3c1521fe87040de8fb7fe4 not found: ID does not exist" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.844784 4957 scope.go:117] "RemoveContainer" containerID="0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303" Nov 28 22:42:01 crc kubenswrapper[4957]: E1128 22:42:01.845077 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303\": container with ID starting with 0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303 not found: ID does not exist" containerID="0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.845105 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303"} err="failed to get container status \"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303\": rpc error: code = NotFound desc = could not find container \"0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303\": container with ID starting with 0c486f6ca7e1d7a759bf24fe77c156097bd0bae16c5e8555947fd17674240303 not found: ID does not exist" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.845130 4957 scope.go:117] "RemoveContainer" 
containerID="a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624" Nov 28 22:42:01 crc kubenswrapper[4957]: E1128 22:42:01.845416 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624\": container with ID starting with a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624 not found: ID does not exist" containerID="a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624" Nov 28 22:42:01 crc kubenswrapper[4957]: I1128 22:42:01.845442 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624"} err="failed to get container status \"a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624\": rpc error: code = NotFound desc = could not find container \"a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624\": container with ID starting with a83e0db4b02da1db7e385f535fec64c9886b9e0ee699d513d0b656d72d0cd624 not found: ID does not exist" Nov 28 22:42:02 crc kubenswrapper[4957]: I1128 22:42:02.829736 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" path="/var/lib/kubelet/pods/77cf7304-3e09-4710-bb1a-b5c1cba21af6/volumes" Nov 28 22:42:05 crc kubenswrapper[4957]: I1128 22:42:05.813717 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:42:05 crc kubenswrapper[4957]: E1128 22:42:05.814582 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:42:16 crc kubenswrapper[4957]: I1128 22:42:16.903608 4957 generic.go:334] "Generic (PLEG): container finished" podID="73621557-cdd8-489b-a534-c85c8cb66e46" containerID="b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65" exitCode=0 Nov 28 22:42:16 crc kubenswrapper[4957]: I1128 22:42:16.903703 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" event={"ID":"73621557-cdd8-489b-a534-c85c8cb66e46","Type":"ContainerDied","Data":"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65"} Nov 28 22:42:16 crc kubenswrapper[4957]: I1128 22:42:16.905018 4957 scope.go:117] "RemoveContainer" containerID="b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65" Nov 28 22:42:17 crc kubenswrapper[4957]: I1128 22:42:17.629358 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pvbdh_must-gather-p8n4m_73621557-cdd8-489b-a534-c85c8cb66e46/gather/0.log" Nov 28 22:42:20 crc kubenswrapper[4957]: I1128 22:42:20.824911 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:42:20 crc kubenswrapper[4957]: E1128 22:42:20.825867 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.402739 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pvbdh/must-gather-p8n4m"] Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.403654 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="copy" containerID="cri-o://387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe" gracePeriod=2 Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.414343 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pvbdh/must-gather-p8n4m"] Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.853147 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pvbdh_must-gather-p8n4m_73621557-cdd8-489b-a534-c85c8cb66e46/copy/0.log" Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.853755 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.957242 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbq7x\" (UniqueName: \"kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x\") pod \"73621557-cdd8-489b-a534-c85c8cb66e46\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.957489 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output\") pod \"73621557-cdd8-489b-a534-c85c8cb66e46\" (UID: \"73621557-cdd8-489b-a534-c85c8cb66e46\") " Nov 28 22:42:29 crc kubenswrapper[4957]: I1128 22:42:29.966740 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x" (OuterVolumeSpecName: "kube-api-access-hbq7x") pod "73621557-cdd8-489b-a534-c85c8cb66e46" (UID: "73621557-cdd8-489b-a534-c85c8cb66e46"). InnerVolumeSpecName "kube-api-access-hbq7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.047429 4957 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pvbdh_must-gather-p8n4m_73621557-cdd8-489b-a534-c85c8cb66e46/copy/0.log" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.047801 4957 generic.go:334] "Generic (PLEG): container finished" podID="73621557-cdd8-489b-a534-c85c8cb66e46" containerID="387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe" exitCode=143 Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.047864 4957 scope.go:117] "RemoveContainer" containerID="387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.048046 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pvbdh/must-gather-p8n4m" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.077363 4957 scope.go:117] "RemoveContainer" containerID="b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.078052 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbq7x\" (UniqueName: \"kubernetes.io/projected/73621557-cdd8-489b-a534-c85c8cb66e46-kube-api-access-hbq7x\") on node \"crc\" DevicePath \"\"" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.129369 4957 scope.go:117] "RemoveContainer" containerID="387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe" Nov 28 22:42:30 crc kubenswrapper[4957]: E1128 22:42:30.138644 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe\": container with ID starting with 387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe not found: ID does not exist" containerID="387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.138699 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe"} err="failed to get container status \"387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe\": rpc error: code = NotFound desc = could not find container \"387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe\": container with ID starting with 387235504dded49ea6109ae81a0e757dae3e99da4a734c73d6700c7c7283f9fe not found: ID does not exist" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.138730 4957 scope.go:117] "RemoveContainer" containerID="b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65" Nov 28 22:42:30 crc kubenswrapper[4957]: E1128 22:42:30.139320 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65\": container with ID starting with b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65 not found: ID does not exist" containerID="b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.139377 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65"} err="failed to get container status \"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65\": rpc error: code = NotFound desc = could not find container \"b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65\": container with ID starting with b157f23c3a3c642dd73c5e27c72cfbd748b35dae064abcdcc725e959d3a04f65 not found: ID does not exist" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.168683 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "73621557-cdd8-489b-a534-c85c8cb66e46" (UID: "73621557-cdd8-489b-a534-c85c8cb66e46"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.180905 4957 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73621557-cdd8-489b-a534-c85c8cb66e46-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 22:42:30 crc kubenswrapper[4957]: I1128 22:42:30.824966 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" path="/var/lib/kubelet/pods/73621557-cdd8-489b-a534-c85c8cb66e46/volumes" Nov 28 22:42:31 crc kubenswrapper[4957]: I1128 22:42:31.813407 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:42:31 crc kubenswrapper[4957]: E1128 22:42:31.813740 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:42:42 crc kubenswrapper[4957]: I1128 22:42:42.813083 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:42:42 crc kubenswrapper[4957]: E1128 22:42:42.813888 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:42:54 crc kubenswrapper[4957]: I1128 22:42:54.814020 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:42:54 crc kubenswrapper[4957]: E1128 22:42:54.815109 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:43:07 crc kubenswrapper[4957]: I1128 22:43:07.812768 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:43:07 crc kubenswrapper[4957]: E1128 22:43:07.814023 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:43:20 crc kubenswrapper[4957]: I1128 22:43:20.827271 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:43:20 crc kubenswrapper[4957]: E1128 22:43:20.828090 4957 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:43:32 crc kubenswrapper[4957]: I1128 22:43:32.073330 4957 scope.go:117] "RemoveContainer" containerID="1fa7ca4de12ae0f0af8e21af4f6c258223ee1218d94c840d2875a2ed965fce22" Nov 28 22:43:32 crc kubenswrapper[4957]: I1128 22:43:32.813372 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:43:32 crc kubenswrapper[4957]: E1128 22:43:32.813893 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.506355 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:38 crc kubenswrapper[4957]: E1128 22:43:38.507502 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="extract-utilities" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507520 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="extract-utilities" Nov 28 22:43:38 crc kubenswrapper[4957]: E1128 22:43:38.507546 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="gather" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507552 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="gather" Nov 28 22:43:38 crc kubenswrapper[4957]: E1128 22:43:38.507567 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="copy" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507577 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="copy" Nov 28 22:43:38 crc kubenswrapper[4957]: E1128 22:43:38.507589 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="extract-content" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507594 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="extract-content" Nov 28 22:43:38 crc kubenswrapper[4957]: E1128 22:43:38.507622 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="registry-server" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507628 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="registry-server" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507888 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="copy" Nov 
28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507912 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="77cf7304-3e09-4710-bb1a-b5c1cba21af6" containerName="registry-server" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.507927 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="73621557-cdd8-489b-a534-c85c8cb66e46" containerName="gather" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.510080 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.538259 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.561392 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.561519 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbb7\" (UniqueName: \"kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.561558 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.663473 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvbb7\" (UniqueName: \"kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.663819 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.663983 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.664289 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " 
pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.664598 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.685495 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvbb7\" (UniqueName: \"kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7\") pod \"redhat-marketplace-pmn95\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:38 crc kubenswrapper[4957]: I1128 22:43:38.857557 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:39 crc kubenswrapper[4957]: I1128 22:43:39.311294 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:39 crc kubenswrapper[4957]: I1128 22:43:39.776537 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerID="0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9" exitCode=0 Nov 28 22:43:39 crc kubenswrapper[4957]: I1128 22:43:39.776647 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerDied","Data":"0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9"} Nov 28 22:43:39 crc kubenswrapper[4957]: I1128 22:43:39.776846 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerStarted","Data":"bdf0973e351c0752062cf12f97c68fa10ba18bfc2910678453cbc4824f310a37"} Nov 28 22:43:41 crc kubenswrapper[4957]: E1128 22:43:41.340112 4957 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b80a21_8f23_4ed4_ba9d_72940ba7f54e.slice/crio-4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b80a21_8f23_4ed4_ba9d_72940ba7f54e.slice/crio-conmon-4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975.scope\": RecentStats: unable to find data in memory cache]" Nov 28 22:43:41 crc kubenswrapper[4957]: I1128 22:43:41.803171 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerID="4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975" exitCode=0 Nov 28 22:43:41 crc kubenswrapper[4957]: I1128 22:43:41.803231 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerDied","Data":"4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975"} Nov 28 22:43:42 crc kubenswrapper[4957]: I1128 22:43:42.834303 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" 
event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerStarted","Data":"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb"} Nov 28 22:43:42 crc kubenswrapper[4957]: I1128 22:43:42.859162 4957 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmn95" podStartSLOduration=2.451296931 podStartE2EDuration="4.859145507s" podCreationTimestamp="2025-11-28 22:43:38 +0000 UTC" firstStartedPulling="2025-11-28 22:43:39.778060046 +0000 UTC m=+6859.246707955" lastFinishedPulling="2025-11-28 22:43:42.185908622 +0000 UTC m=+6861.654556531" observedRunningTime="2025-11-28 22:43:42.842022634 +0000 UTC m=+6862.310670533" watchObservedRunningTime="2025-11-28 22:43:42.859145507 +0000 UTC m=+6862.327793416" Nov 28 22:43:46 crc kubenswrapper[4957]: I1128 22:43:46.813308 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:43:46 crc kubenswrapper[4957]: E1128 22:43:46.814164 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:43:48 crc kubenswrapper[4957]: I1128 22:43:48.858033 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:48 crc kubenswrapper[4957]: I1128 22:43:48.858387 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:48 crc kubenswrapper[4957]: I1128 22:43:48.906397 4957 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:48 crc kubenswrapper[4957]: I1128 22:43:48.956349 4957 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:49 crc kubenswrapper[4957]: I1128 22:43:49.142611 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:50 crc kubenswrapper[4957]: I1128 22:43:50.913221 4957 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmn95" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="registry-server" containerID="cri-o://34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb" gracePeriod=2 Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.414296 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.558371 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvbb7\" (UniqueName: \"kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7\") pod \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.559115 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities\") pod \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.559205 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content\") pod \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\" (UID: \"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e\") " Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.559840 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities" (OuterVolumeSpecName: "utilities") pod "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" (UID: "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.561075 4957 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.565446 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7" (OuterVolumeSpecName: "kube-api-access-bvbb7") pod "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" (UID: "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e"). InnerVolumeSpecName "kube-api-access-bvbb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.577737 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" (UID: "f9b80a21-8f23-4ed4-ba9d-72940ba7f54e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.662856 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvbb7\" (UniqueName: \"kubernetes.io/projected/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-kube-api-access-bvbb7\") on node \"crc\" DevicePath \"\"" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.662903 4957 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.927835 4957 generic.go:334] "Generic (PLEG): container finished" podID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerID="34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb" exitCode=0 Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.927895 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerDied","Data":"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb"} Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.927932 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn95" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.927946 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn95" event={"ID":"f9b80a21-8f23-4ed4-ba9d-72940ba7f54e","Type":"ContainerDied","Data":"bdf0973e351c0752062cf12f97c68fa10ba18bfc2910678453cbc4824f310a37"} Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.927969 4957 scope.go:117] "RemoveContainer" containerID="34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.947039 4957 scope.go:117] "RemoveContainer" containerID="4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975" Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.964562 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.976004 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn95"] Nov 28 22:43:51 crc kubenswrapper[4957]: I1128 22:43:51.982010 4957 scope.go:117] "RemoveContainer" containerID="0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.031280 4957 scope.go:117] "RemoveContainer" containerID="34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb" Nov 28 22:43:52 crc kubenswrapper[4957]: E1128 22:43:52.031819 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb\": container with ID starting with 34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb not found: ID does not exist" containerID="34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.031860 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb"} err="failed to get container status 
\"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb\": rpc error: code = NotFound desc = could not find container \"34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb\": container with ID starting with 34ace7f70bb54a5b9830068c60976317e224c33a37aeeaea725fc5bb122b68fb not found: ID does not exist" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.031891 4957 scope.go:117] "RemoveContainer" containerID="4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975" Nov 28 22:43:52 crc kubenswrapper[4957]: E1128 22:43:52.032340 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975\": container with ID starting with 4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975 not found: ID does not exist" containerID="4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.032392 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975"} err="failed to get container status \"4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975\": rpc error: code = NotFound desc = could not find container \"4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975\": container with ID starting with 4d7caffe5e2af2bccbb96e92986fc87a1f526d9a5dfe44302a99b3fb7d808975 not found: ID does not exist" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.032422 4957 scope.go:117] "RemoveContainer" containerID="0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9" Nov 28 22:43:52 crc kubenswrapper[4957]: E1128 22:43:52.032931 4957 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9\": container with ID starting with 0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9 not found: ID does not exist" containerID="0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.032966 4957 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9"} err="failed to get container status \"0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9\": rpc error: code = NotFound desc = could not find container \"0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9\": container with ID starting with 0153af1efebaca95dc066bcba4e4335b5d8767175287fe5f0ed26421cb7dadc9 not found: ID does not exist" Nov 28 22:43:52 crc kubenswrapper[4957]: I1128 22:43:52.824587 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" path="/var/lib/kubelet/pods/f9b80a21-8f23-4ed4-ba9d-72940ba7f54e/volumes" Nov 28 22:43:57 crc kubenswrapper[4957]: I1128 22:43:57.813664 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:43:57 crc kubenswrapper[4957]: E1128 22:43:57.814821 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:44:11 crc kubenswrapper[4957]: I1128 22:44:11.813517 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:44:11 crc kubenswrapper[4957]: E1128 22:44:11.814183 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:44:23 crc kubenswrapper[4957]: I1128 22:44:23.813036 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:44:23 crc kubenswrapper[4957]: E1128 22:44:23.813935 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:44:38 crc kubenswrapper[4957]: I1128 22:44:38.814089 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:44:38 crc kubenswrapper[4957]: E1128 22:44:38.815442 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:44:51 crc kubenswrapper[4957]: I1128 22:44:51.813860 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:44:51 crc kubenswrapper[4957]: E1128 22:44:51.816019 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.203479 4957 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs"] Nov 28 22:45:00 crc kubenswrapper[4957]: E1128 22:45:00.204802 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="registry-server" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.204825 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="registry-server" Nov 28 22:45:00 crc kubenswrapper[4957]: E1128 
22:45:00.204865 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="extract-utilities" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.204874 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="extract-utilities" Nov 28 22:45:00 crc kubenswrapper[4957]: E1128 22:45:00.204904 4957 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="extract-content" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.204912 4957 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="extract-content" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.205190 4957 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b80a21-8f23-4ed4-ba9d-72940ba7f54e" containerName="registry-server" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.206248 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.210132 4957 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.215848 4957 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.220888 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs"] Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.262762 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.263074 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.263369 4957 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx59b\" (UniqueName: \"kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.366024 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: 
I1128 22:45:00.366171 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx59b\" (UniqueName: \"kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.366288 4957 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.367121 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.372677 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.383591 4957 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx59b\" (UniqueName: \"kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b\") pod \"collect-profiles-29406165-wpwvs\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:00 crc kubenswrapper[4957]: I1128 22:45:00.532424 4957 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:01 crc kubenswrapper[4957]: I1128 22:45:01.002331 4957 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs"] Nov 28 22:45:01 crc kubenswrapper[4957]: I1128 22:45:01.852379 4957 generic.go:334] "Generic (PLEG): container finished" podID="540daab6-9057-483a-9ae4-4fe140699357" containerID="578d9bddde311154c7d186d6827ce0e2fc7e135ee4e089af6872fa7467a645ef" exitCode=0 Nov 28 22:45:01 crc kubenswrapper[4957]: I1128 22:45:01.852790 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" event={"ID":"540daab6-9057-483a-9ae4-4fe140699357","Type":"ContainerDied","Data":"578d9bddde311154c7d186d6827ce0e2fc7e135ee4e089af6872fa7467a645ef"} Nov 28 22:45:01 crc kubenswrapper[4957]: I1128 22:45:01.852941 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" event={"ID":"540daab6-9057-483a-9ae4-4fe140699357","Type":"ContainerStarted","Data":"7038cffb5f81c93a0798670ad75d9a98d9be327d58721981aab3945ebbaddbf9"} Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.303877 4957 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.334014 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume\") pod \"540daab6-9057-483a-9ae4-4fe140699357\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.334159 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume\") pod \"540daab6-9057-483a-9ae4-4fe140699357\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.334304 4957 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx59b\" (UniqueName: \"kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b\") pod \"540daab6-9057-483a-9ae4-4fe140699357\" (UID: \"540daab6-9057-483a-9ae4-4fe140699357\") " Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.335157 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume" (OuterVolumeSpecName: "config-volume") pod "540daab6-9057-483a-9ae4-4fe140699357" (UID: "540daab6-9057-483a-9ae4-4fe140699357"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.340489 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b" (OuterVolumeSpecName: "kube-api-access-nx59b") pod "540daab6-9057-483a-9ae4-4fe140699357" (UID: "540daab6-9057-483a-9ae4-4fe140699357"). InnerVolumeSpecName "kube-api-access-nx59b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.348072 4957 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "540daab6-9057-483a-9ae4-4fe140699357" (UID: "540daab6-9057-483a-9ae4-4fe140699357"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.437458 4957 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/540daab6-9057-483a-9ae4-4fe140699357-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.437492 4957 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540daab6-9057-483a-9ae4-4fe140699357-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.437503 4957 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx59b\" (UniqueName: \"kubernetes.io/projected/540daab6-9057-483a-9ae4-4fe140699357-kube-api-access-nx59b\") on node \"crc\" DevicePath \"\"" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.884340 4957 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" event={"ID":"540daab6-9057-483a-9ae4-4fe140699357","Type":"ContainerDied","Data":"7038cffb5f81c93a0798670ad75d9a98d9be327d58721981aab3945ebbaddbf9"} Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.884410 4957 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7038cffb5f81c93a0798670ad75d9a98d9be327d58721981aab3945ebbaddbf9" Nov 28 22:45:03 crc kubenswrapper[4957]: I1128 22:45:03.884421 4957 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29406165-wpwvs" Nov 28 22:45:04 crc kubenswrapper[4957]: I1128 22:45:04.382245 4957 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w"] Nov 28 22:45:04 crc kubenswrapper[4957]: I1128 22:45:04.393690 4957 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29406120-ctv2w"] Nov 28 22:45:04 crc kubenswrapper[4957]: I1128 22:45:04.825677 4957 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdc6630-d96c-4f3c-a2c6-6e304141de0a" path="/var/lib/kubelet/pods/dcdc6630-d96c-4f3c-a2c6-6e304141de0a/volumes" Nov 28 22:45:06 crc kubenswrapper[4957]: I1128 22:45:06.813520 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:45:06 crc kubenswrapper[4957]: E1128 22:45:06.814246 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:45:18 crc kubenswrapper[4957]: I1128 22:45:18.817835 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:45:18 crc kubenswrapper[4957]: E1128 22:45:18.818565 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:45:29 crc kubenswrapper[4957]: I1128 22:45:29.813124 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:45:29 crc kubenswrapper[4957]: E1128 22:45:29.814301 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:45:32 crc kubenswrapper[4957]: I1128 22:45:32.167867 4957 scope.go:117] "RemoveContainer" containerID="34532db8d1e36fcb92be65f99aa36ef631ad79c9d272ef9f66eb66a2840e58db" Nov 28 22:45:40 crc kubenswrapper[4957]: I1128 22:45:40.828365 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:45:40 crc kubenswrapper[4957]: E1128 22:45:40.829381 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827" Nov 28 22:45:55 crc kubenswrapper[4957]: I1128 22:45:55.813882 4957 scope.go:117] "RemoveContainer" containerID="d965c1eec682f5f58f14fb3ab35cca3fbcf982801e9da9801fe78ee501127786" Nov 28 22:45:55 crc kubenswrapper[4957]: E1128 22:45:55.814626 4957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hq5x2_openshift-machine-config-operator(8d41c2ca-d1ca-46b0-be19-6e4693f0b827)\"" pod="openshift-machine-config-operator/machine-config-daemon-hq5x2" podUID="8d41c2ca-d1ca-46b0-be19-6e4693f0b827"